Artificial Intelligence
-
Ha right, and not the reality of server farms devouring tons of energy. That aspect itself will be an interesting crisis point for AI to eventually reach.
-
@Matt It's funny that you bring that up as I thought the same thing once or twice myself. I think it's a natural progression for all people to look back at the good ol' days and worry about the future. Your grandparents did it, your parents, and you...
We keep moving forward for better or worse and only time will tell which side of the equation we and our offspring are on.
-
@Matt I've gone down that road of thought before, but I always end up back at: The whole point was to give them the tools to do better than me, so they can take the keys one day.
-
Has no one watched Terminator or I, Robot or The Matrix? Not sure about that last one...
-
@louisbosco hahaha, classics! and movies like those both guided the reality and set the tone for our reception of it
-
I suppose the downfall will come when we decide how much we rely on AI... with machine learning, would they learn that we are the cause of all problems? Sounds like a familiar plot to me... haha
-
@WhiskeySandwich I find your signature equally poignant!
Mad how former emperors of Rome and dump-truck graffiti scrawlers can touch a man's soul in similar ways...
-
@popvulture said in Artificial Intelligence:
I’ve been subscribing to the mindset that it’ll be less of a job replacer and more of a job augmenter/assistant in the future. There are all kinds of things that stuff like Midjourney, Stable Diffusion, etc already do to make my life a shitload easier.
This. I (a lawyer) have been dabbling with AI for the last few weeks, and so far it's really good at some things, and then hits a wall.
For instance, if an interesting case gets a judgment, I can ask it to summarise the case, and it will do a very decent job. If I ask it to weigh the impact of that case in the broader corpus of caselaw in that area, it throws a wobbly. So far, at least, it looks like it will help us with understanding an area of law, but won't replace the ability to think creatively about how to apply that law to our clients' issues.
As a general point, I find there is an art to crafting a good prompt too. If I just say "summarise case [X]" then it does a passable but not too impressive job. But if I say "summarise the legal arguments, judicial reasoning, and broader implications of case [X]" then it is much more impressive. You basically need to treat it like an intelligent ignoramus: capable of doing a lot but unaware of any context. So if you tell it (i) what you want; (ii) why you want it; (iii) what you expect its answer to include; and (iv) how you want it presented, it does a lot more for you than otherwise.
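To make the four-part structure concrete, here's a minimal sketch of how you could template it in code. The helper name, field labels, and the example case are all illustrative, not from any real library or tool:

```python
def build_prompt(task, reason, expected, fmt):
    """Assemble a prompt from the four parts described above:
    (i) the task, (ii) why you want it, (iii) what the answer
    should include, and (iv) how it should be presented."""
    return (
        f"Task: {task}\n"
        f"Context: {reason}\n"
        f"Your answer should include: {expected}\n"
        f"Present it as: {fmt}"
    )

prompt = build_prompt(
    task="Summarise the legal arguments, judicial reasoning, "
         "and broader implications of case X",
    reason="I am preparing a briefing note for colleagues",
    expected="the key holdings and any dissenting opinions",
    fmt="five bullet points in plain English",
)
print(prompt)
```

The point is less the code than the habit: filling in all four slots every time stops you from sending the "summarise case [X]"-style prompts that get passable-but-unimpressive answers.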
The scary thing is that this tech is as bad as it's ever going to be right now. That chap from one of these AI companies had serious egg on his face just a couple of weeks ago when he said that he didn't think an AI would ever be able to generate video, and then OpenAI announced Sora literally the next day. I've been trying to write a PowerPoint to educate my colleagues about all this stuff and how to use it, and it's been a really hard project to complete because there's constant development and news in this field and I have to keep updating the damned thing!
-
Yep, post-ChatGPT, advancement in this field is measured in days, unlike the decades it took before.
-
@EdH “ You basically need to treat it like an intelligent ignoramus: capable of doing a lot but unaware of any context.” This bit for sure. It’s smart as hell and can do the task, but you have to be very clear in defining the task. If you want a specific answer, ask a specific question.
-
@WhiskeySandwich said in Artificial Intelligence:
If you want a specific answer, ask a specific question.
To which I'd add, "and direct it to the resource". I've had a few examples of it referencing older law - presumably because there will be more written about old law - which has been superseded. But when I've specifically told it "make sure you refer to this" and then given it a link or copy-pasted the updated legislation into the chat box, it has been able to incorporate that into its response.
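As a sketch of that "direct it to the resource" approach: paste the up-to-date authority into the prompt itself and instruct the model to answer only from it. The function name and wording below are my own illustration, not from any particular tool:

```python
def grounded_prompt(question, source_text):
    """Pin the model to a supplied authority by pasting it into
    the prompt, rather than trusting whatever (possibly
    superseded) law dominated its training data."""
    return (
        "Answer using ONLY the legislation provided below. "
        "If the text does not cover the question, say so.\n\n"
        f"Legislation:\n{source_text}\n\n"
        f"Question: {question}"
    )

p = grounded_prompt(
    "Does section 3 still apply?",
    "Section 3 was repealed by the 2023 Act.",  # hypothetical source text
)
print(p)
```

The explicit "say so if it's not covered" line matters too: it gives the model a sanctioned way out instead of nudging it toward inventing an answer.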
-
@EdH said in Artificial Intelligence:
So if you tell it (i) what you want; (ii) why you want it; (iii) what you expect its answer to include; and (iv) how you want it presented, it does a lot more for you than otherwise.
I find that a tad scary in that your request predisposes the answer and effectively reinforces any preconceived notions you may have had in asking the question in the first place.
In science we are taught to avoid bias in the question at all costs. While the bias is still inevitable, the following steps (research, hypothesis, experiment, data analysis) help minimize the bias in the conclusion.
Does the scientific method or a similar construct exist in law?
-
@EdH yeah that could definitely be tricky. I find search engines doing that in general and it’s very frustrating. Especially if it finds false or speculative information, or even opinions, but presents them as results all the same. There’s just so much junk data out there... You have to wade through an ocean of bs just to get a little nugget of truth or fact.
-
@goosehd yes, in a way, it’s just telling you what you want to hear. Lol
-
@goosehd I think I'm not making myself clear enough, maybe.
By "what you want" I mean what task you want it to do. So "summarise this case" or "draft me a couple of paragraphs summarising [law bit]". I personally wouldn't ask it for "give me cases that support argument X" at this stage. Doing so has landed lawyers in hot water because the AI models are not search engines and it leads to some models hallucinating cases that don't exist. (But I know that there are models under development in law that would be able to answer that query.) At the end of the day, one key piece of advice when using these models is to check everything before anything goes out.
Generally, yeah, I do find law to be a bit analogous to science:
research = research your client's facts and the relevant law
hypothesis = predict the outcome by applying the law to your client's factual matrix
experiment = take the case to court and see what happens
I wouldn't say that a good lawyer is only concerned with getting "exactly what they are looking for" though. It may appear that way to the outside world, but a good lawyer should be advising their client honestly on the legal strengths and weaknesses of their position. Those conversations are privileged, of course, so don't get seen by the outside world. And a client in a legally weak position may have other options open to them, like settling the case early (if a dispute) or using other means to improve their bargaining position. I tell my clients things they don't want to hear all the time, but that very rarely means they have no options or agency in the situation at all.
-
This would be equivalent to self-driving cars. Who's liable?
- the self-driving AI would not be legally liable
- the person that owns the car could be absolved of liability because he/she would technically not be in control
And that's not even taking into account causation and the chain of effect.
-
@EdH Thank you for your detailed response!!
Further question: You typically build an argument based upon existing laws and precedents, and I can see you using AI to strengthen your argument, but could you also use it as a tool to poke holes in your argument and give you perspectives that you weren't anticipating?
I could see that being quite useful for preparing you and your client in any case you were examining, and for saving you time and the client's money in those arguments.
-
New OpenAI x Figure demo.
They've also confirmed that no parts of the videos are edited/sped up, and that all responses are generated in real time, with no pre-programming.
What gets me though, is the human-like vocal quirks! The uhms, and ahhs really anthropomorphise the machine in my mind...