Hacker News | evdubs's comments

Their valuation is crazy. Trading at $260, with no profit, their price:sales ratio was 36x (extremely high). Now, trading at $195, their price:forward earnings ratio is 130x (extremely high). Unless they crushed earnings and revenue estimates and juiced their forward guidance, Cloudflare stock could have gone anywhere. Also, the stock is trading where it was a month ago.

Indeed. COIN releases earnings on May 7 in the evening. Q4 2025 was the first quarter where they had a negative EPS in the past couple years. Most analyst estimates for Q1 2026 are trending downward. This "difficult decision" seems to be all about getting in front of a bad earnings release.

With the hope of receiving a reply with proof....

There is absolutely no way a 3 month old is potty trained. As in, the 3 month old infant can communicate and use a toilet. They likely can't even hold their head up at that age.

https://www.babycenter.com/baby/diapering/infant-potty-train... indicates that a potty can be introduced from 4 to 6 months old. Potty trained by 18 months is much more reasonable.


> Lisp hackers have been effortlessly reshaping the language for decades using the powerful macro system and extending and bending the language to their will.

I've written a bit of Racket code (https://github.com/evdubs?tab=repositories&q=&type=&language...) and I still haven't written a macro. In only one case did I even think a macro would be useful: merging class member definitions to include both the type and the default value on the same line. It's sort of a shame that Racket, a Scheme with a much larger standard library and many great user-contributed libraries, has to deal with the Scheme/Lisp marketing of "you can build low level tools with macros" when it's more likely that Racket developers won't need to write macros since they're already written and part of the standard library.

> But the success of Parsec has filled Hackage with hundreds of bespoke DSLs for everything. One for parsing, one for XML, one for generating PDFs. Each is completely different, and each demands its own learning curve. Consider parsing XML, mutating it based on some JSON from a web API, and writing it to a PDF.

What a missed opportunity to preach another gospel of Lisp: s-expressions. XML and JSON are forms of data that are likely not native to the programming language you're using (the exception being JSON in JavaScript). What is better than XML or JSON? s-expressions. How do Lisp developers deal with XML and JSON? Convert it to s-expressions. What about defining data? Since you have s-expressions, you aren't limited to XML and JSON and you can instead use sorted maps for your data or use proper dates for your data; you don't need to fit everything into the array, hash, string, and float buckets as you would with JSON.
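To make the s-expression pitch above concrete, here's a minimal Racket sketch (the exact representation is a choice, not a standard; libraries such as the sxml package make different ones):

    #lang racket

    ;; The same record as JSON would force everything into
    ;; objects/arrays/strings/floats; as an s-expression we
    ;; can use richer values directly.
    (define person
      '((name . "John Smith")
        (age . 27)
        (balance . 1/3)              ; exact rational, not a lossy float
        (roles . (admin editor))))   ; symbols, no quoting ceremony

    ;; Querying is just ordinary list operations:
    (cdr (assq 'age person))   ; => 27
    (cdr (assq 'roles person)) ; => '(admin editor)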

If you've been hearing about Lisp and you get turned off by all of this "you can build a DSL and use better macros" marketing, Racket has been a much more comfortable environment for a developer used to languages with large standard libraries like Java and C#.


> How do Lisp developers deal with XML and JSON? Convert it to s-expressions.

As a common lisp developer, that is only very vaguely true for me.

The mapping I prefer for json<->Lisp is:

  true:  t
  false: nil
  null:  :null
  []     #()
  {}     (make-hash-table :test #'equal)
This falls out of my desire for the mapping to be bijective:

- The only built-in type that is unambiguously a mapping type is hash-table.

- nil is the only value that is falsy in CL

- () is the same as nil, so we can't use it as an empty list; vectors are the obvious alternative

- Not really any obvious values left to use for "null" so punt to a keyword.


In Kernel I would use something like this:

    true        #t
    false       #f
    null        ()
    [...]       (& ...)
    "k" : v     (: k v)
    {...}       (@ ...)  
Where &, :, @ are defined as:

    ($define! &
        ($lambda args (cons list args)))

    ($define! : 
        ($vau (key value) env
            (list key (eval value env))))
            
    ($define! @ 
        (wrap 
            ($vau kvpairs env 
                (eval (list* $bindings->environment kvpairs) env))))
Using the "person" example from the JSON/syntax section on Wikipedia:

    ($define! person
        (@
            (: first_name "John")
            (: last_name "Smith")
            (: is_alive #t)
            (: age 27)
            (: address 
                (@
                    (: street_address "21 2nd Street")
                    (: city "New York")
                    (: state "NY")
                    (: postal_code "10021-3100")))
            (: phone_numbers
                (& (@ (: type "home") (: number "212 555-1234"))
                   (@ (: type "office") (: number "646 555-4567"))))
            (: children
                (& "Catherine" "Thomas" "Trevor"))
            (: spouse ())))
I would then define `?`

    ($define! ? $remote-eval)
Now we can query the object.

    > (? age person)
    27

    > (? postal_code (? address person))
    "10021-3100"

    > (car (? children person))
    "Catherine"

    > (cdr (? children person))
    ("Thomas" "Trevor")

    > (? type (cadr (? phone_numbers person)))
    "office"

    > (? number (car (? phone_numbers person)))
    "212 555-1234"

    > ($define! full_name ($lambda (p) (string-append (? first_name p) " " (? last_name p))))
    > (full_name person)
    "John Smith"


I don't know Kernel very well; what will the value of person print as?

There's no "external representation" for environments. In klisp it will just print:

    [#environment]
The environment type is encapsulated, so it doesn't give you very useful debug information.

Perhaps having `@` produce an environment is the wrong approach and we should just produce an association list instead - then move `$bindings->environment` into the `?` operative to enable querying.


In Clojure

    true: true
    false: false
    null: nil
    []: []
    {}: {}

Sometime back 15 years ago [0], I hit a bit of an existential crisis regarding my career and the kind of work I was doing.

I thought the particular technology I was working in was "part of the problem", as I felt pigeon-holed by .NET and C# to always be a corporate-monkey CRUD consultant. So, I went out in search of something better. Different programming languages. Different environments. Just something that wasn't working for asshole clients who thought it was okay to yell at people about an outage at a hotel on the complete opposite side of the country, one that was due more to local radio interference than to anything I had done in the database configuration code. Long story; it involved missing a holiday with my family over something completely outside of my control, and I still got blamed for it. The problem wasn't the technology, it was the company I was working for, but at that time in my life, I didn't understand the difference.

Racket was a life preserver at that time.

It's really hard to explain, because I never actually ended up working in Racket full-time and I haven't even touched it in probably 10 years. But it still has this impact on my identity as a software developer. I learned Racket. I forced myself out of being a Blub programmer and into someone who saw the strings that underwrote The Universe. The beauty of s-expressions and syntactic forms and code-is-data and all that. It had a permanent impact on my view of what this job could be.

I still work primarily in .NET. Most of the things that were technological issues with .NET Framework were resolved by what was first .NET Core and what is now just .NET. So, I no longer feel like my tools are holding me back. And I'll forever be thankful to Racket (and the community! The Racket listserv was amazing back then. Probably still is, I just don't interact with it anymore) for being there for me.

Edit: Haskell was in fact another language I explored at that time, in addition to OCaml and Ruby and Python (ugh! Don't get me started on Python!) and many other things. They were all "cool" in their own way, but nothing felt like Racket. They all had their own weird rules that felt like being bossed around again. Racket felt like art. Racket felt like it was there for me, not the other way around.

[0] I still think of this time as the "mid-point" in my career, but it's now been long enough ago that I've been more past the crisis than I was ever in it. Strange feelings.


> [...] who thought it was okay to yell at people about [...]

That society as a whole accepts this kind of abuse, no matter the industry or circumstances, is beyond me. It's an abuse of power. If anybody did this to anyone, the only appropriate response would be to walk away and never come back. Nobody would want to accept this kind of crap from family and friends, so why is it okay in a professional setting? Because of the money/power dynamics at play? We need a consensus in society to walk away; that would end it in no time.


I had the "good fortune" to lose everything I owned in a flood shortly after that incident. The insurance payout paid off all my debts. I was single and had no material attachments to the world. I suddenly felt no obligation to suffer any disrespect anymore.

Got fired shortly thereafter for basically refusing to commit timesheet fraud. That's when I went on my "programming language walkabout." It was an amazing time.

Now I have a wife and kids and a mortgage. But also now I'm the boss, working hard to not inflict the same bullshit on my people that was inflicted upon me.


> Nobody would want to accept this kind of crap from family and friends

Hm… I think I have bad news for you.


Yes, often you're the pressure-release valve for urges that friends and family otherwise suppress. Especially family.

For what it's worth, anytime I have written a macro it's usually not because it's needed, but just because I think it'll be fun :)

When I learned Scheme, I liked the language but strongly disliked macros and quotation. I'd only been using it a short while, and when I searched for solutions to a few problems, these "fexpr" things kept coming up, which I didn't understand, and this "Kernel" language. I decided to learn it since "fexprs" were apparently the solution to several of my problems. This wasn't easy at first - I had to read the Kernel Report several times - but I ended up finding it way more intuitive than using macros and quotes.

I've not written a Scheme macro since. I've written hundreds of Kernel operatives though.

I was also a typoholic previously, but am in remission now thanks to Kernel.

https://web.cs.wpi.edu/~jshutt/kernel.html


Think of macros as what you want when you want to perform computation at compile time rather than run time.

An example: building the equivalent of a switch statement, but that compares (via string equality) with a set of strings. The macro would translate this into code that would do something like a decision tree on string length or particular characters at particular positions.

Basically anything that's done with a preprocessor in another language can be done with macros in Lisp family languages.
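For instance, here's a small Racket sketch of expansion-time computation (the names `fact` and `static-fact` are made up for illustration):

    #lang racket

    ;; This function exists at expansion time, not run time.
    (define-for-syntax (fact n)
      (if (zero? n) 1 (* n (fact (sub1 n)))))

    ;; (static-fact 5) expands to the literal 120 before the
    ;; program ever executes.
    (define-syntax (static-fact stx)
      (syntax-case stx ()
        [(_ n) #`#,(fact (syntax->datum #'n))]))

    (static-fact 5) ; => 120, with no multiplication at run time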


The other motivation for me is to drastically reduce boilerplate code. I can't believe people here are saying they never use macros; they are so good for this that avoiding them sounds to me like a skill issue! Overuse can damage readability, sure, but so can pretending macros are not an option.

Operatives do that for me, better than macros. Parent is correct that macros are compile time, which gives them a performance advantage over operatives - but IMO, they're not better ergonomically. I find operatives simpler, cleaner and more powerful.

I hadn't heard of operatives. Can you describe them or provide a link?

Operatives are based on FEXPRs from older Lisps - they're basically a function-like form, but where the operands are not implicitly reduced at the time of call.

    (foo (+ 2 3) (* 3 4))
    ($bar (+ 2 3) (* 3 4))
`foo` is a function; when it is combined with its arguments, it receives the values 5 and 12.

`$bar` however, receives its operands verbatim. It receives (+ 2 3) as its first operand and (* 3 4) as its second - unevaluated.

The operative/FEXPR body decides how to evaluate the operands - if at all.

The difference between an operative/FEXPR and a macro is that macros are second-class objects which must appear by name - we cannot assign them to variables, or pass them to or return them from functions. Operatives and FEXPRs are first-class objects that can be treated like any other value.

The difference between FEXPRs and operatives is to do with scoping and environments. FEXPRs were around before Scheme - when Lisps were dynamically scoped. This meant we could have unpredictable behavior and so-called "spooky action at a distance". They were problematic and were almost entirely abandoned in the 1980s.

Shutt introduced Operatives as a more hygienic version - based on statically scoped Scheme. Instead of the operative being able to mutate the dynamic environment arbitrarily, there are limitations. The first part of this is that environments are made into first-class objects - so we can assign them to a symbol and pass them around. The final part is that an operative receives a reference to the dynamic environment of its caller - which we bind to a symbol using the operative constructor, `$vau`.

    ($vau (operands) dynamic-env . body)
Compare to:

    ($lambda (arguments) . body)
So operatives are called in the same way a function is called - but the operands are not reduced, and the environment is passed implicitly.

The body can decide to evaluate the operands using the environment of the caller - essentially behaving as if the caller had evaluated them

    (eval operands dynamic-env)

But it can choose other evaluation strategies for the operands - such as evaluating them in a custom created environment, which we can make with (make-environment) or ($bindings->environment).

This also allows the operative to mutate the environment of its caller - but only the locals of that environment. The parent environments cannot be mutated through the reference `dynamic-env`.

Technically, `$lambda` is not primitive in Kernel - though it is the main constructor of applicatives (functions) - the primitive constructor is called `wrap` - and it takes another combiner (an operative or applicative) as its parameter. Wrapping a combiner simply forces the evaluation of its arguments when called - so functions are just wrappers around operatives - and the underlying operative of any function can be extracted with `unwrap`.
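As a sketch of that relationship, the Kernel Report itself derives `$lambda` from `$vau` and `wrap` along these lines:

    ($define! $lambda
        ($vau (formals . body) env
            (wrap (eval (list* $vau formals #ignore body) env))))
So a "function" is literally an operative whose argument evaluation has been forced by `wrap`.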

There's a lot more to them. They're conceptually quite simple in terms of implementation, but they have enormous potential use cases that are unexplored.

Read more on the Kernel page[1]. In particular, the Kernel report[2]. There's also a formal calculus describing them, called the vau calculus[3].

[1]:https://web.cs.wpi.edu/~jshutt/kernel.html

[2]:https://ftp.cs.wpi.edu/pub/techreports/pdf/05-07.pdf

[3]:https://web.archive.org/web/20150224035948/http://www.wpi.ed...


Hmm, this sounds like exactly the opposite of what I was talking about. It delays execution rather than promoting execution to compile time.

What I had expected you to talk about was some way of getting the compile time execution of macros by a sufficiently smart compiler that could do extensive partial evaluation at compile time, including crossing procedure boundaries. Of course that's antithetical to the Lisp philosophy of allowing dynamic redefinition of functions and such.


In Common Lisp macros can also be used to implement a kind of Aspect-Oriented Programming, using the macroexpand hook. This hook enables macroexpansion to be dynamically modified at compile time without changing the source code.

I understand the use case, but Scheme macros never felt intuitive to me. I think it may be the quotation more than anything that I dislike - though I also dislike that they're second class (which was the key thing which led me to Kernel).

I use C preprocessor macros extensively and don't have the typical dislike for them that many people have - though I clearly understand their limitations and the advantage Scheme macros have over them.

Since learning Kernel, the boundary of "compile time" and "runtime" is more blurry - I can write operatives which behave somewhat like a macro, and I do more "multi-stage" programming, where one operative optimizes its argument to produce something more efficient which is later evaluated - though there are still limitations due to the inability to fully compile Kernel.

As one example, I've used a kind of operative I call a "template", which evaluates its free symbols ahead of time but doesn't actually evaluate the body. When we later apply it to some operands, it replaces the bound symbols with the operands, looking up any symbols to produce an expression with all symbols fully resolved - which we don't need to immediately evaluate either. This is somewhere between a macro and a regular operative.

Consider:

    ($define! z 10)

    ($define! @add-z
        ($template (x y)
            (+ x y z)))
In this template `x` and `y` are bound variables and `+` and `z` are free. The template resolves the free symbols and returns an operative expecting 2 operands, effectively providing an operative with the body:

    ([#applicative: +] x y 10)
When we call the template with the two operands, it resolves any symbols in the arguments and returns the full expression with no symbols present, but it doesn't evaluate the expression yet.

    > ($let ((x 9)
             (y 7))
          (@add-z (* x 3) (- y 13)))
    ([#applicative: +] ([#applicative: *] 9 3) ([#applicative: -] 7 13) 10)
When we decide to evaluate the expression, no symbol lookup is necessary - it can perform the operation rather quickly, despite the slow interpretation.

---

The $template form above isn't too difficult to implement. I've iterated several forms of this - some of which only partially resolved the bound symbols - but lost them in a RAID failure. I still have an earlier version, which has some issues, because I put it online:

    ($provide! ($template)
        ($define! $resolve-free-symbols
            ($vau (params expr) env
                ($cond
                    ((null? expr) ())
                    ((pair? expr)
                        (cons (apply (wrap $resolve-free-symbols) 
                                     (list params (car expr)) 
                                     env)
                              (apply (wrap $resolve-free-symbols) 
                                     (list params (cdr expr)) 
                                     env)))
                    ((symbol? expr)
                        ($if (member? expr params)
                             expr
                             (eval expr env)))
                    (#t expr))))

        ($define! $resolve-bound-symbols
            ($vau (params expr) env
                ($cond
                    ((null? expr) ())
                    ((pair? expr)
                        (cons (apply (wrap $resolve-bound-symbols)
                                     (list params (car expr))
                                     env)
                              (apply (wrap $resolve-bound-symbols)
                                     (list params (cdr expr))
                                     env)))
                    ((symbol? expr)
                        ($if (member? expr params)
                             (eval expr env)
                             expr))
                    (#t expr))))

        ($define! zip
            ($lambda (fst snd)
                ($cond
                    (($and? (null? fst) (null? snd)) ())
                    (($and? (pair? fst) (pair? snd))
                        (cons (list (car fst)
                                    (list* (($vau #ignore #ignore list)) (car snd)))
                              (zip (cdr fst) (cdr snd)))))))

        ($define! $template
            ($vau (params body) senv
                ($let ((newbody 
                        (eval (list $resolve-free-symbols params body) senv)))
                    ($vau args denv
                        (eval (list $resolve-bound-symbols params newbody)
                              (eval (list* $bindings->environment
                                           (zip params args))
                                    denv))))))) 

---

At present the best interpreter is klisp, and the fastest is bronze-age-lisp, which builds on klisp, with parts hand-written in 32-bit x86 assembly.

I've been working on a faster interpreter for a number of years as a side project, optimized for x86_64, with some parts in C and some in assembly. It has diverged in some places from the Kernel Report, but still retains what I see as the key ingredients.

My modified Kernel has optional types, and we have operatives to `$typecheck` complex expressions ahead of evaluating them. I intend to go all in on the "multi-stage" aspect and have operatives to JIT-compile expressions in a manner similar to the above template.


Which implementation do you use?

I use klisp[1] and bronze-age-lisp[2] mostly for testing, as they're the closest to a feature complete implementation of the Kernel Report.

I've written a number of less complete interpreters over the years. I currently have a long-running side-project to provide a more complete, highly optimized implementation for x86_64.

[1]:https://github.com/dbohdan/klisp

[2]:https://github.com/ghosthamlet/bronze-age-lisp


This is my experience, too, and my solution has been to run Debian.


Doesn't snap come back on the next OS upgrade?

I was using Ubuntu and installed the apt version of Firefox as the snap version would not open html files in locations like /var/tmp and would not work with USB devices. Every time I ran `do-release-upgrade`, all of that work would need to be redone. It was very annoying.


I, too, have used it. It works well and is especially great for data sharing.


There must be a table with three columns and 4-6 rows.


You specify which software patents you want it to use?


AI reading the patent is basically cleanroom reverse engineering according to current AI IP standards :D


Patents aren't defeated by cleanroom reverse engineering. You can create something yourself in your bedroom and use it yourself without ever knowing the patented thing exists, and still violate the patent. That's why they're so scary.

You won't get caught if you write something yourself and use it yourself, but programmers (unlike entrepreneurs) have a pattern of avoiding illegal things instead of avoiding getting caught.


It's not a perfect joke I'll admit.


The sad part is that most software patents are so woefully underspecified and content-free that even Claude might have trouble coming up with an actual implementation.


But it ultimately doesn't even matter, because they contain nothing of value anyway. For example, googling G06F in Google Patents yields this weird one from yesterday.

https://patents.google.com/patent/US12411877B1/en?q=(G06F)&c...

This shit patent is effectively claiming to have invented a "layer" that takes user prompts in a service, determines if the prompts need to be responded to in "real time mode", and if so routes the prompt to an LLM that runs quickly and returns the results. (As opposed to some batched API, I suppose?)

I mean, this is just routing requests based on whether the query is prioritized. It's a patent claiming to have invented an IF statement. Most patents are of this quality or worse.

Might as well read viXra papers for better ideas. And I mean this sincerely, because at least they aren't as obfuscated and the authors at least pretend to have ideas.


Patterns?


That was my assumption as well.

I caught iOS trying to autocorrect something I wrote twice yesterday, and somehow before I hit submit it managed it a third time; I had to edit it afterward, where it tried three more times to change it back.

Autocorrect won’t be happy until we all sound like idiots and I wonder if that’s part of how they plan to do away with us. Those hairless apes can’t even use their properly.


Yeah patterns. lol!


> When I read Rob's work and learn from it, and make it part of my cognitive core, nobody is particularly threatened by it. When a machine does the same it feels very threatening to many people, a kind of theft by an alien creature busily consuming us all and shitting out slop.

It's not about reading. It's about output. When you start producing output in line with Rob's work that is confidently incorrect and sloppy, people will feel just as they do when LLMs produce output that is confidently incorrect and sloppy. No one is threatened if someone trains an LLM and does nothing with it.

