AI

Michael Moriarity

Here's an amusing example of the limits of large language models. To understand the joke, you need to realize how security vulnerabilities in software are exploited. The system under attack always takes some sort of input, whether text, pictures, structured data or something else. The attacker tries to discover some input the programmer did not anticipate, one that makes the program fail in a way that hands control to the attacker. So the essence of secure programming is to make damn sure that wherever input you don't control is processed, that processing is done carefully enough not to fall for such an attack.

The Register has an article up about a team of researchers at Université du Québec who used ChatGPT to generate a bunch of programs (which seems to be a common use these days). They then tested each of the 21 generated programs for security against a specific known vulnerability. Only 5 were secure. When they asked ChatGPT whether these programs were insecure, it admitted they were, but only when asked specifically. The punch line is this quote:

Thomas Claburn wrote:
The academics observe in their paper that part of the problem appears to arise from ChatGPT not assuming an adversarial model of code execution. The model, they say, "repeatedly informed us that security problems can be circumvented simply by 'not feeding an invalid input' to the vulnerable program it has created."
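
To make that failure mode concrete, here is a minimal sketch of my own (not code from the paper, whose examples differ): a lookup that is only "secure" as long as nobody deliberately feeds it malicious input, next to the careful version.

    import sqlite3

    # Toy database just for the demonstration; the table and data are made up.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    def find_user_unsafe(name):
        # Vulnerable: the caller's text is pasted straight into the SQL,
        # so hostile input becomes part of the program (classic SQL injection).
        return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

    def find_user_safe(name):
        # Careful handling of untrusted input: a parameterized query keeps
        # the attacker's text as data, never as code.
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    print(find_user_unsafe("x' OR '1'='1"))   # dumps every row -- the exploit
    print(find_user_safe("x' OR '1'='1"))     # returns nothing, as it should

The unsafe version works perfectly well right up until someone supplies exactly the "invalid input" that ChatGPT was counting on never receiving.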

Michael Moriarity

Here's another one. In this case, lawyers used ChatGPT to prepare a legal brief in support of a motion before a judge in the federal court for the Southern District of New York. The brief contained at least six fictional cases as references, complete with proper citations and non-existent quotes. Very funny and very embarrassing, but not surprising.

6079_Smith_W

This just in. Robot overlords.

https://www.bbc.com/news/uk-65746524

I am wondering if it isn't just promo for the new Dune movie. Those who have read the book will understand.

NDPP

AI Poses 'Risk of Extinction' Industry Leaders Warn

https://twitter.com/nytimes/status/1663640241838718977

"Executives from leading artificial intelligence companies like Open AI and Google, have signed an open letter warning of the risks of the AI there were building.

Industry leaders warn that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars."

Michael Moriarity

AI isn't the threat, it's the capitalism that drives the development of these technologies that is the threat. Capitalism is much more likely to result in the extinction of humanity than any technological development. These geeks working on AI have their heads so far up their asses that they can't see the real world, just the make-believe one they live in.

Michael Moriarity

Here's a very good article by a professor of computational linguistics explaining why LLMs, by design, cannot possibly understand the output they create. The opening paragraph:

Emily M. Bender wrote:

With the advent of ChatGPT, large language models (LLMs) went from a relatively niche topic to something that many, many people have been exposed to. ChatGPT is presented as an entertaining system to chat with, a dialogue partner, and (through Bing) a search interface.* But fundamentally, it is a language model, that is, a system trained to produce likely sequences of words based on the distributions in its training data. Because it models those distributions very closely, it is good at spitting out plausible sounding text, in different styles. But, as always, if this text makes sense it’s because we, the reader, are making sense of it.
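
A toy illustration of Bender's point (my own, and vastly simpler than a real LLM): the sketch below "trains" on a tiny made-up corpus by counting which word follows which, then spits out plausible-looking continuations without anything resembling understanding.

    from collections import Counter, defaultdict
    import random

    # Tiny made-up corpus; a real model trains on billions of words.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1            # record how often each word follows another

    def generate(start, length=6):
        word, out = start, [start]
        for _ in range(length):
            choices = follows[word]
            if not choices:
                break
            # pick the next word in proportion to how often it followed this one
            word = random.choices(list(choices), weights=choices.values())[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))   # e.g. "the cat sat on the mat" -- plausible, not understood

If the output reads as sensible, that is because we read the sense into it; the program only knows word frequencies.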

6079_Smith_W

There are a number of episodes of Black Mirror that touch on this theme.

Metalhead is the best, IMO. And entirely plausible as it isn't really about AI.

Michael Moriarity

I've recently learned about some developments at M.I.T. that will probably transform the field of machine learning. The new technique is called liquid neural networks. The inventors reasoned that since a simple animal like the worm C. elegans can do some pretty impressive computing with only about 300 neurons, it should be possible to create a virtual neural network that works with far fewer virtual neurons than current machine learning models, which use millions or even billions.

They did this by changing two major things in the design of the models. First, they made the internal processing of each neuron much more complex than in current models: what is now a simple algebraic expression becomes a differential equation that must be solved each time new data comes in. Second, they made the strengths of the synapses connecting the neurons variable at run time, whereas in current models these strengths are fixed once and for all during training.
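
For the curious, here is a rough sketch (my own simplification, not the MIT team's code) of what "solving a differential equation per neuron at every input step" can look like, using a semi-implicit Euler step; all names and constants here are illustrative assumptions.

    import numpy as np

    def liquid_neuron_step(h, x, W_in, W_rec, tau, A, dt=0.01, unfolds=6):
        """Advance one 'liquid' neuron state h by one input step.

        The state roughly obeys
            dh/dt = -h / tau + f(x, h) * (A - h)
        where f depends on the current input, so the effective time constant
        shifts with the data -- the run-time variability described above.
        """
        for _ in range(unfolds):                      # numerically solve the ODE
            f = np.tanh(W_in * x + W_rec * h)         # input-dependent synapse strength
            h = (h + dt * f * A) / (1.0 + dt * (1.0 / tau + f))   # semi-implicit Euler update
        return h

    h = 0.0
    for x in [0.2, 0.5, -0.1]:                        # a toy input stream
        h = liquid_neuron_step(h, x, W_in=1.3, W_rec=0.7, tau=1.0, A=1.0)
        print(h)

The point of the exercise: each neuron does far more work per input than a conventional artificial neuron, which is how so few of them can carry so much behaviour.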

By doing these things, they were able to create an autonomous car driving system that performs better than most existing ones with only 19 virtual neurons. They extended this to three dimensions to create a 29-neuron network that allows a drone to follow a target at a set distance over previously unknown terrain.

Here is a nice video of the development team discussing their creation, and here is an article about it in IEEE Spectrum. Keep an eye on this one. In my opinion, this will change everything, and in a few months you won't hear about anything else.

Michael Moriarity

Here's another episode in the ChatGPT comedy of errors.

Thomas Claburn wrote:

ChatGPT, OpenAI's fabulating chatbot, produces wrong answers to software programming questions more than half the time, according to a study from Purdue University. That said, the bot was convincing enough to fool a third of participants.

The Purdue team analyzed ChatGPT’s answers to 517 Stack Overflow questions to assess the correctness, consistency, comprehensiveness, and conciseness of ChatGPT’s answers. The US academics also conducted linguistic and sentiment analysis of the answers, and questioned a dozen volunteer participants on the results generated by the model.

"Our analysis shows that 52 percent of ChatGPT answers are incorrect and 77 percent are verbose," the team's paper concluded. "Nonetheless, ChatGPT answers are still preferred 39.34 percent of the time due to their comprehensiveness and well-articulated language style." Among the set of preferred ChatGPT answers, 77 percent were wrong.

Michael Moriarity

Microsoft's travel AI, based on the same technology as ChatGPT, is recommending the Ottawa Food Bank as a "cannot miss" tourist destination.

NDPP

AI: The New Arms Race (& vid)

https://www.rt.com/shows/modus-operandi/582738-andy-mok-ai-chat-gpt/

"...The MO's Manilla Chan speaks to international relations expert Andy Mok about the rise of AI and how governments are responding to the emergence of systems like ChatGPT."

Michael Moriarity

Wow, I just clicked on a YouTube video, and got a commercial, as often happens. But this ad was a deepfake of Justin Trudeau pitching a get rich quick scheme. He promised to pay $100K out of his personal fortune of over $200B to anyone who failed to make $30K in their first month. I have seen plenty of other obvious scams advertised on YouTube, but it surprised me that they would actually air this one. I wonder if the media will notice?

Mobo2000

I often get deepfakes of Elon Musk promising a new tech venture too.

Michael Moriarity

Two more big fails for LLMs such as ChatGPT.

First, Michael Cohen, Trump's old lawyer, has passed on to his own lawyer fictitious legal citations created by Google Bard, which were then filed in court. Cohen is now begging for forgiveness, rather like the unfortunate attorney in post 53.

Second, a study by pediatricians in New York shows that ChatGPT is extremely poor at diagnosing case studies of ill children.

This technology is not going to take over the world, despite all the hype from the tech bros.

Michael Moriarity

OpenAI, the company that makes ChatGPT, has revealed its latest generative AI system. It is called Sora, and it makes short, photo-realistic videos from text prompts. I find it extremely impressive, especially considering that it has no idea what the content of the videos it creates actually is. Here is a good video that shows some of the samples and discusses the limitations of the system.

Michael Moriarity

Here is yet another indication that the current type of AI system, the large language model, will never be reliable enough to be depended upon for crucial answers. New York City has released a beta of its MyCity chatbot, which is intended to provide information about city laws and regulations. Unfortunately, it has been giving out obviously wrong advice. For example, it has said that tenants cannot be evicted for failure to pay rent. Alas, although that probably should be true, it is not. One question was asked multiple times; the bot gave the correct answer once and wrong answers ten times.

This is going to continue until someone is foolish enough to use an LLM to handle life and death questions. Soon after that, a wrong answer will result in hundreds or thousands of deaths. At that point, some politicians may realize that this whole technology is really only good for parlour games and making billions for tech companies.

Michael Moriarity

In another insane development, Microsoft and OpenAI, the makers of ChatGPT, are planning a 100 billion dollar supercomputer to crank out even more probably-wrong answers. I haven't read the whole article because of the paywall, but just imagine the good that could be done with $100B: free higher education for every U.S. citizen, or an end to homelessness. Today's capitalists are more loony than ever, and this is one big manifestation of their madness.

ryanw

My friend is an artist/animator, and they've been out of work for six months. There's no commercial appetite for very nice content anymore when 'ok' AI stuff is available, it's free, and you own the content in perpetuity. All the bread-and-butter projects are going extinct. The savings for film and television are immense. Large production companies that were expanding have halted construction and shelved 90% of future plans while exploring AI.

Maybe it doesn't answer questions right (currently), but it can replicate scenery backdrops, or the curvature of facial features and textures. Some good diploma mills in the fine arts, and it's not even their fault.

Michael Moriarity

Completely agree, ryanw. AI doesn't have to be very good to cause a lot of harm in late-stage capitalism. Computer-produced crap can replace human craftsmanship because the owners don't care what sort of slop they serve the public, or how many people they make "redundant", as long as they make profits.

NDPP

Report: Israel Used AI To Identify Bombing Targets in Gaza

https://www.theverge.com/2024/4/4/24120352/israel-lavender-artificial-in...

"Lavender, an artificial intelligence tool developed for the war, marked 37,000 Palestinians as suspected Hamas operatives and authorized their assassinations..."

AI as death-machine.

Michael Moriarity

The next step will be to take humans completely out of the loop. Then an LLM will be able to "hallucinate" a threat for some random reason, and then kill a randomly selected person. Pretty grim stuff.

NDPP

More:

Lavender & 'Where's Daddy?'

https://www.youtube.com/watch?v=4RmNJH4UN3s

Democracy Now: "How Israel used AI to form kill-lists and bomb Palestinians in their homes."

Canada supports.
