Hacker News | Sevii's comments

He found a way to charge people for open source

How is this not effectively a ban on representing yourself in court? The lawyers and judge are going to be using AI. But the layman isn't allowed to use it?

It's no different than if you ask a friend (who is not your lawyer) for advice. You can ask anything you want; it only gets the special protection if it's actually your lawyer.

The AI is a glorified search engine, not a human!

And?

This is exactly what it is. I know someone who's essentially representing themselves in family court. They had attorneys, but attorneys are basically useless when the opposition has more money and can spam you with AI-generated motions, which you then need to pay a lawyer to respond to. They began representing themselves due to lack of money and lawyer incompetence, and actually started to shut down the opposition... then the judge threatened contempt of court and jail time during one hearing if they continued to make a statement rather than accept a court-appointed attorney to speak for them. Family court in the US is an absolute farce. The same judge recently started asking about "ChatGPT" and mentioning that anything there would need to be disclosed to the courts. The person I know was primarily using their own local machine and models, however.

It's not just family courts. Judges absolutely loathe anyone who appears without counsel, mostly because they've been burned by too many sovcits and other nonsense that jams up the system. So even if you're competent, they'll try everything they can to shut you down and won't give you much time of day versus the party with the lawyers.

There is also the possibility that the person in question aggravated the judge by acting in a bonkers way. I have totally seen that: someone who doesn't understand rules and procedures making all the wrong moves and then framing themselves as an unfairly treated victim.

So how would it apply to web searches? If a lawyer searches something for a person's case, is it protected? If a person searches something for their own case, does it have a similar level of protection? It seems AI chats would need to follow the same rules.

>>"An attorney who represents himself in court has a fool for a client."

I'm in a years-long lawsuit in my state's small claims sessions court (as plaintiff, jurisdictional $25k limit). It's petty and essentially just two old men yelling at clouds ["on principle"]. Nobody is in a hurry, and the timeline has roughly followed ChatGPT's rollout (albeit completely unrelated) – the tech just keeps jaw-dropping.

What started as a years-long disagreement eventually became a small claims lawsuit, pro se with AI, counterclaims/insanity/&all... and now we both have attorneys representing our interests ($$$$$).

I still use a local (offline) LLM to field my rudimentary legalese into better questions for my human attorney (which saves a little $$). Together we three have squashed all counterclaims, including a counter-lawsuit (that I probably could have managed with AI alone, but it was much more natural/comfortable not representing myself). Very grateful for both my attorney and accountant (as a blue-collar electrician).

My hope going forward judicially is that some sort of amalgamated law-AI platform can increase access for laypeople to our already-overwhelmed judgeships, as Chief Justice Roberts wrote about in his 2023 Year-End Report on the Federal Judiciary. Attorney-client privilege needs to be extended to LLMs, which is definitely achievable offline (until the inevitable IT failure/hack).


NYS is busy trying to cozy up to those that own the state and make it illegal for AI services to give legal opinions to non-lawyers.

I specifically chose my current human lawyer because he is a sci-fi geek (we've now both read the same trilogy, rife with AI/bots/betrayal) and was receptive to me using offline LLMs to better understand myself and my case.

It is wise, though, that courts somehow prevent lawyers from relying entirely upon the output of chatbots.


I think this means that if lawyers use it, they have also lost confidentiality. That could be a significant issue in a big case.

[Edit: Or maybe not, legally. But they have definitely lost confidentiality in the "corporate secrets" sense, and that may still matter.]


If lawyers use it, they may have the ability to claim work product exemption, although this itself is going to be dependent on a lot more factors I can't analyze.

This is really the question. Conversely, why would an attorney get privilege over chatbot interactions when an individual using a chatbot in their own defense would not?

While it's true that 'figuring out what exactly needs to be programmed' was always the hard part, it's not the part the most money was spent on. Actually programming the thing always took up the most time and money.

True enough, but I think that a lot of "actually programming the thing" turned out to be "figuring out what exactly needs to be programmed". Afterwards, people did not want to admit that this was the case, perhaps even to themselves, because it seemed like a failure to plan. However, in most (nearly all?) cases, spending more time prior to programming would not have produced a better outcome. Usually, the best way to figure out what needs to be programmed is to start doing it, and occasionally take a step back to evaluate what you've learned about the problem space and how that changes what you want to actually program.

In other words, "figuring out what needs to be programmed" and "actually programming the thing" look the same while they're happening. Afterwards, one could say that the first 90% was figuring out and only the last 10% was actually doing it. The distinction matters because if you do something that makes programming happen faster but figuring out happen slower, it can have the surprising effect of making the whole thing take longer.
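A toy calculation (entirely made-up numbers, just to illustrate the tradeoff described above) shows why a tool that only speeds up the typing can still slow down the whole project:

```python
# Toy model with hypothetical numbers: if 90% of the effort is figuring out
# what to build and 10% is typing code, doubling typing speed barely helps,
# while even a modest drag on the figuring-out dominates the total.
figuring, coding = 90.0, 10.0            # hours (illustrative 90/10 split)

baseline = figuring + coding             # 100.0 hours total
with_tool = figuring * 1.2 + coding / 2  # figuring 20% slower, coding 2x faster

print(baseline, with_tool)  # 100.0 113.0
```

Despite halving the coding time, the hypothetical tool makes delivery later overall, because the bottleneck was never the typing.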


> Usually, the best way to figure out what needs to be programmed, is to start doing it, and occasionally take a step back to evaluate what you've learned about the problem space and how that changes what you want to actually program.

Replace the verb "program" with "do" or anything else, and you've got a profound universal philosophical insight right there.


I'm curious how this would work with LLMs increasing prototyping speed: low-stakes changes to try something out, learn from it, and pivot.

My company is fully remote, so all meetings are virtual and can be set to produce transcripts. Parsing through those for the changes needed and trying them out can be as simple as copy-paste, plan, verify, execute, and distribute.


> Actually programming the thing always took up the most time and money.

I'm curious if any quantitative research has been done comparing time spent writing code vs. time spent gathering and understanding requirements, documenting, coordinating efforts across developers, design and architecture, etc.


Continuous delivery really killed QA.

Adsense is designed to have as many footguns as possible.


Nope, it's totally dead.


The problem is that health insurance companies squander immense amounts of money on adjudicating claims. Huge amounts of GDP are spent on fights between insurers and providers over what is covered.


You can deduce that cannot be true from medical loss ratios, which measure the money flowing out to healthcare providers. At roughly 85% or so, that leaves 15% for the entirety of the rest of the business, including adjudication.

https://www.kff.org/private-insurance/medical-loss-ratio-reb...

https://www.oliverwyman.com/our-expertise/insights/2023/mar/...
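As a back-of-the-envelope sanity check on that claim (the 85% figure is the approximate ratio from the comment above, not an exact regulatory number):

```python
# Medical loss ratio arithmetic with illustrative figures.
# If ~85% of premium dollars flow out to providers, then everything else,
# including claims adjudication, other admin, and profit, must fit in
# the remaining ~15%, so adjudication alone cannot consume "huge" GDP shares.
premiums = 100.0           # arbitrary premium dollars
medical_loss_ratio = 0.85  # approximate figure from the comment

paid_to_providers = premiums * medical_loss_ratio
admin_and_profit = premiums - paid_to_providers

print(f"to providers: {paid_to_providers:.0f}")       # to providers: 85
print(f"everything else: {admin_and_profit:.0f}")     # everything else: 15
```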

That is not to say the adjudication process is done well. In fact, it is hugely wasteful, either intentionally or unintentionally, and the problem is that the government does not audit the insurance companies often enough, nor does it levy penalties sufficient to incentivize proper and efficient adjudication.

The government should be doing constant random checks on claims to see if they were processed and adjudicated in a timely and efficient manner with a sufficiently low error rate on behalf of the adjudicators, and the government is basically doing none of that.


A lot of it seems to be porting open source projects to rust for other open source projects to consume.


AI providers can only charge what the market will bear. AI isn't worth $20k/month for 'PhD'-level work, but people are willing to pay for several $200/month subscriptions.

But fundamentally, AI compute is a commodity. GPUs are made in factories at scale. Assuming AI quality tapers off, supply will eventually catch up to demand.

Finally, open-weights models are good enough that the leading labs cannot charge high margins.


Works for me with a Pro sub at https://gemini.google.com/app

