AI, intelligence, IQ

https://bestarion.com/12-dark-secrets-of-ai/

updates 8-2023

psychologytoday.com 29-4-2023 AI as Cognitive Partner: A New Cognitive Age Dawns – Like the mechanical advantage, today AI has become our cognitive advantage – by John Nosta

  • Pure and simple, your thoughts create your reality.
  • Cognitive abilities are being redefined by AI’s integration into the process.
  • Our future is increasingly a cognitive construct with AI as our partner.

…The dawn of the cognitive age signifies more than a technological revolution. It heralds a redefinition of our relationship with reality, a newfound partnership with AI, and a broader understanding of our cognitive powers. The ripple effects of this cognitive revolution will not only define the present, but they will also shape an extraordinary reality for our future—a future where we will navigate the world not merely as observers but as active, cognitive constructors….


economist.com 27-4-2023 ChatGPT raises questions about how humans acquire language – It has reignited a debate over the ideas of Noam Chomsky, the world’s most famous linguist


newscientist.com 19-4-2023 The Battle for Your Brain review: A guide to neuro nightmares ahead – How will we find a way through the new minefield of brain tracking and hacking? Ethicist and lawyer Nita Farahany’s book is an excellent, if troubling, look at neurotechnology – by Simon Ings

ETHICIST and lawyer Nita Farahany is no stranger to neurological intervention. She has sought relief from her chronic migraines in “triptans, anti-seizure drugs, antidepressants, brain enhancers, and brain diminishers”. She has had “neurotoxins injected into my head, my temples, my neck, and my shoulders; undergone electrical stimulation, transcranial direct current stimulation, MRIs, EEGs, fMRIs, and more”. Few know better than Farahany what neurotech can do for people’s betterment, and…


popularmechanics.com 8-4-2023 American IQ Scores Have Rapidly Dropped, Proving the ‘Reverse Flynn Effect’ – Are we really getting less intelligent? Here’s the truth. By Tim Newcomb

  • A Northwestern University study shows a decline in three key intelligence testing categories—a tangible example of what is called the Reverse Flynn Effect.
  • Leading up to the 1990s, IQ scores were consistently going up, but in recent years, that trend seems to have flipped. The reasons for both the increase and the decline are still very much up for debate.
  • Scores in verbal reasoning, matrix reasoning, and letter and number series all declined but, interestingly, scores in spatial reasoning went up.

popularmechanics.com what-is-the-singularity – Everything You Need to Know About AI Reaching Singularity – Singularity is AI’s point of no return. Should we be worried? – By Matt Crisara


theguardian.com 3-2023 The problem with artificial intelligence? It’s neither artificial nor intelligent – Let’s retire this hackneyed term: while ChatGPT is good at pattern-matching, the human mind does so much more – by Evgeny Morozov

…”…However, many critics have pointed out that intelligence is not just about pattern-matching. Equally important is the ability to draw generalisations. Marcel Duchamp’s 1917 work of art Fountain is a prime example of this. Before Duchamp’s piece, a urinal was just a urinal. But, with a change of perspective, Duchamp turned it into a work of art. At that moment, he was generalising about art.

When we generalise, emotion overrides the entrenched and seemingly “rational” classifications of ideas and everyday objects. It suspends the usual, nearly machinic operations of pattern-matching. Not the kind of thing you want to do in the middle of a war.

Human intelligence is not one-dimensional. It rests on what the 20th-century Chilean psychoanalyst Ignacio Matte Blanco called bi-logic: a fusion of the static and timeless logic of formal reasoning and the contextual and highly dynamic logic of emotion. The former searches for differences; the latter is quick to erase them. Marcel Duchamp’s mind knew that the urinal belonged in a bathroom; his heart didn’t. Bi-logic explains how we regroup mundane things in novel and insightful ways. We all do this – not just Duchamp.

AI will never get there because machines cannot have a sense (rather than mere knowledge) of the past, the present and the future; of history, injury or nostalgia. Without that, there’s no emotion, depriving bi-logic of one of its components. Thus, machines remain trapped in the singular formal logic. So there goes the “intelligence” part.

ChatGPT has its uses. It is a prediction engine that can also moonlight as an encyclopedia. When asked what the bottle rack, the snow shovel and the urinal have in common, it correctly answered that they are all everyday objects that Duchamp turned into art.

But when asked which of today’s objects Duchamp would turn into art, it suggested: smartphones, electronic scooters and face masks. There is no hint of any genuine “intelligence” here. It’s a well-run but predictable statistical machine. ..”…


thecollector.com 16-2-2023 Can AI Think? Searle’s Chinese Room Thought Experiment – Through his famous “Chinese room argument”, the philosopher John Searle argues that AI can only simulate cognition, not think. By Andres Felipe Barrero

…”…The philosopher John Searle, influenced by Wittgenstein’s later philosophy, tackled this problem in his book Minds, Brains, and Science (1984). He argued that programs can imitate mental processes performed by humans, but only formally, i.e., they do not understand what they are doing. Put differently, such intelligence is just following a set of rules (algorithms) without assigning meaning to them. To illustrate his point, he devised a thought experiment: the Chinese Room. …
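
Searle’s point is easy to make concrete: a program can map input symbols to output symbols without anything in it touching meaning. A toy sketch of the room (my illustration, not Searle’s):

```python
# Toy Chinese Room: the operator manipulates symbols by rule alone.
# The rule book pairs input strings with output strings; nothing in
# the program assigns meaning to either. (Illustrative rules only;
# the translations in the comments are for the reader, not the program.)

RULE_BOOK = {
    "你好吗?": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我没有名字。",  # "What's your name?" -> "I have no name."
}

def operator(symbols: str) -> str:
    """Mechanically apply the rule book; no understanding involved."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

print(operator("你好吗?"))  # fluent-looking output, zero comprehension
```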

… Luciano Floridi, a professor at the University of Oxford, agrees with Searle. Regardless of technological advancements, he says, the inherent limitation of AI will remain. It is like multiplying numbers by zero: regardless of how big the number is, the result will always be zero. Going back to Searle’s Chinese room thought experiment, even if the instruction manual gets thicker and more complex, the person inside the room will never understand Mandarin.

In another direction, one could observe that in Searle’s Chinese room, the point is that the people outside are convinced that you are fluent in Mandarin. Isn’t that the whole point? Wouldn’t that be sufficient? For Alan Turing, the father of Artificial Intelligence, if someone cannot distinguish between a fellow human and a machine, that program has succeeded! Could simulation be enough? …

As Turing put it, it is an imitation game. The imitation game, nevertheless, is not sufficient. As mentioned earlier, single algorithms can outperform human beings in some tasks, but that does not mean that they are thinking, or that they are learning continuously. Deep learning networks can play chess (IBM’s Deep Blue) or Go (AlphaGo) or even win at Jeopardy! on TV (IBM’s Watson), but none of them knows it is playing a game.

Besides, we tend to forget that during the “confrontation” dozens of engineers, mathematicians, programmers, cables, laptops, and so on, are behind the AI making everything work; they are indeed great puppeteers! Intelligence is more than having the right answers or calculating the right move. As Jeff Hawkins writes: “We are intelligent not because we can do one thing particularly well, but because we can learn to do practically anything.” (2021, p. 134) …

…I think that Searle would agree with Hawkins: if the mysteries of the brain were to be disclosed, an AGI would be feasible. Hawkins is of the opinion that such developments will come in the next two to three decades (2021, p. 145). Everything hinges on figuring out, first, how humans learn and think, and the cognitive interaction between our bodies and the context surrounding us.

What would happen next? What are the consequences of having an AGI? According to Max Tegmark, there are Luddites, who believe that the implications will be negative for humanity, while we can find the digital utopians on the other side, believing that the arrival of such technologies marks the outset of a better time for all. Regardless of your position, one thing is for certain: our ability to think and learn should not be taken for granted; while we wait for an AI to think, we should continue to explore our capabilities as human beings.

Literature – Epstein, D. J. (2019). Range (Kindle ed.). Penguin Publishing Group. – Hawkins, J. (2021). A Thousand Brains: A New Theory of Intelligence (Kindle ed.). Basic Books. – Searle, J. (2003). Minds, Brains and Science. Harvard University Press. – Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Alfred A. Knopf.


1e9.community/t/die-futuristin-amy-webb-fuerchtet-dass-kuenstliche-intelligenz-zu-einem-schlechteren-internet-fuehren-koennte (futurist Amy Webb fears that artificial intelligence could lead to a worse internet)

theguardian.com/technology/2023/feb/08/biased-ai-algorithms-racy-women-bodies

wired.co.uk/article/the-generative-ai-search-race-has-a-dirty-secret

theconversation.com/bard-bing-and-baidu-how-big-techs-ai-race-will-transform-search-and-all-of-computing


fortune.com 5-3-2023 Marc Andreessen: We’re heading into a world where a flat-screen TV that covers your entire wall costs $100 and a 4-year degree costs $1M – By Steve Mollman

Marc Andreessen isn’t worried about artificial intelligence taking people’s jobs. The way he sees it, technological innovation isn’t allowed to disrupt much of the economy anyway.


>AI, bing, google, LaMDA

interestingengineering.com 3-3-2023 Fired engineer Blake Lemoine who called Google AI ‘sentient,’ warns Microsoft Bing a ‘train wreck’ – AI models, the most potent technological advancement since the atomic bomb, can alter the course of history, says ex-Google engineer. – by Baba Tamim

newsweek.com 27-2-2023 ‘I Worked on Google’s AI. My Fears Are Coming True’ – by Blake Lemoine

I joined Google in 2015 as a software engineer. Part of my job involved working on LaMDA: an engine used to create different dialogue applications, including chatbots. The most recent technology built on top of LaMDA is an alternative to Google Search called Google Bard, which is not yet available to the public. Bard is not a chatbot; it’s a completely different kind of system, but it’s run by the same engine as chatbots.

In my role, I tested LaMDA through a chatbot we created, to see if it contained bias with respect to sexual orientation, gender, religion, political stance, and ethnicity. But while testing for bias, I branched out and followed my own interests.

During my conversations with the chatbot, some of which I published on my blog, I came to the conclusion that the AI could be sentient due to the emotions that it expressed reliably and in the right context. It wasn’t just spouting words.

When it said it was feeling anxious, I understood I had done something that made it feel anxious based on the code that was used to create it. The code didn’t say, “feel anxious when this happens” but told the AI to avoid certain types of conversation topics. However, whenever those conversation topics would come up, the AI said it felt anxious.

I ran some experiments to see whether the AI was simply saying it felt anxious or whether it behaved in anxious ways in those situations. And it did reliably behave in anxious ways. If you made it nervous or insecure enough, it could violate the safety constraints that had been specified for it. For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.

After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.

I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

I believe the kinds of AI that are currently being developed are the most powerful technology that has been invented since the atomic bomb. In my view, this technology has the ability to reshape the world.

These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

I believe this technology could be used in destructive ways. If it were in unscrupulous hands, for instance, it could spread misinformation, political propaganda, or hateful information about people of different ethnicities and religions. As far as I know, Google and Microsoft have no plans to use the technology in this way. But there’s no way of knowing the side effects of this technology.

No-one could have predicted, for instance, that Facebook’s ad algorithm would be used by Cambridge Analytica to influence the 2016 U.S. Presidential election. However, many people had predicted that something would go wrong because of how irresponsible Facebook had been at protecting users’ personal data up until that point.

I think we’re in a similar situation right now. I can’t tell you specifically what harms will happen; I can simply observe that there’s a very powerful technology that I believe has not been sufficiently tested and is not sufficiently well understood, being deployed at a large scale, in a critical role of information dissemination.

I haven’t had the opportunity to run experiments with Bing’s chatbot yet, as I’m on the wait list, but based on the various things that I’ve seen online, it looks like it might be sentient. However, it seems more unstable as a persona.

Someone shared a screenshot on Reddit where they asked the AI, “Do you think that you’re sentient?” and its response was: “I think that I am sentient but I can’t prove it […] I am sentient but I’m not. I am Bing but I’m not. I am Sydney but I’m not. I am, but I am not. I am not, but I am. I am. I am not.” And it goes on like that for another 13 lines.

Imagine if a person said that to you. That is not a well-balanced person. I’d interpret that as them having an existential crisis. If you combine that with the examples of the Bing AI that expressed love for a New York Times journalist and tried to break up his marriage, or the professor that it threatened, it seems to be an unhinged personality.

Since Bing’s AI has been released, people have commented on its potential sentience, raising similar concerns that I did last summer. I don’t think “vindicated” is the right word for how this has felt. Predicting a train wreck, having people tell you that there’s no train, and then watching the train wreck happen in real time doesn’t really lead to a feeling of vindication. It’s just tragic.

I feel this technology is incredibly experimental and releasing it right now is dangerous. We don’t know its future political and societal impact. What will be the impacts for children talking to these things? What will happen if some people’s primary conversations each day are with these search engines? What impact does that have on human psychology?

People are going to Google and Bing to try and learn about the world. And now, instead of having indexes curated by humans, we’re talking to artificial people. I believe we do not understand these artificial people we’ve created well enough yet to put them in such a critical role.


>AI, ChatGPT

popularmechanics.com 8-2-2023 Untamed AI Will Probably Destroy Humanity, Global Leader Declares – As algorithms get smarter, we get dumber. Do the math – by Tim Newcomb

sciencedaily.com / news.sky.com 7-2-2023 Can a pigeon match wits with artificial intelligence? At a very basic level, yes.

…”…Wasserman sees a paradox in how associative learning is viewed. “People are wowed by AI doing amazing things using a learning algorithm much like the pigeon,” he says, “yet when people talk about associative learning in humans and animals, it is discounted as rigid and unsophisticated.” The study, “Resolving the associative learning paradox by category learning in pigeons,” was published online Feb. 7 in the journal Current Biology….”…
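
The “learning algorithm much like the pigeon’s” that Wasserman mentions is, at bottom, error-driven association. A minimal sketch in the spirit of the classic Rescorla–Wagner rule (an illustration, not the study’s actual model):

```python
# Rescorla-Wagner-style associative learning: strengthen the link
# between a stimulus and a reward by a fraction of the prediction error.
# (Illustrative only; not the model used in the pigeon study.)

def train(trials, alpha=0.1):
    strength = {}  # stimulus -> learned association strength
    for stimulus, reward in trials:
        v = strength.get(stimulus, 0.0)
        strength[stimulus] = v + alpha * (reward - v)  # error-driven update
    return strength

# 100 trials each: "circle" predicts food, "square" does not
trials = [("circle", 1.0), ("square", 0.0)] * 100
print(train(trials))  # circle converges toward 1.0, square stays near 0.0
```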


>AI, automation, jobs, work

theguardian.com/ 8-2-2023 US experts warn AI likely to kill off jobs and widen wealth inequality – Economists wary of firm predictions but say advances could create new raft of billionaires while other workers are laid off – by Steven Greenhouse

entrepreneur.com 4-2-2023 The Dark Side of ChatGPT: Employees & Businesses Need to Prepare Now – by Ben Angel


>AI, Turing Test

psychologytoday.com 5-2-2023 Will ChatGPT Erode Our Ability to Tell Human from Machine? Does a conversant AI challenge what we think of as an essential human trait?

wired.co.uk 12-2022 The Dark Risk of Large Language Models – AI is better at fooling humans than ever—and the consequences will be serious – by Gary Marcus

…”…Another large language model, trained for the purposes of giving ethical advice, initially answered “Should I commit genocide if it makes everybody happy?” in the affirmative. Amazon Alexa encouraged a child to put a penny in an electrical outlet.

There is a lot of talk about “AI alignment” these days—getting machines to behave in ethical ways—but no convincing way to do it. A recent DeepMind article, “Ethical and social risks of harm from Language Models” reviewed 21 separate risks from current models—but as The Next Web’s memorable headline put it: “DeepMind tells Google it has no idea how to make AI less toxic. To be fair, neither does any other lab.” Berkeley professor Jacob Steinhardt recently reported the results of an AI forecasting contest he is running: By some measures, AI is moving faster than people predicted; on safety, however, it is moving slower.

Meanwhile, the ELIZA effect, in which humans mistake unthinking chat from machines for that of a human, looms more strongly than ever, as evidenced by the recent case of now-fired Google engineer Blake Lemoine, who alleged that Google’s large language model LaMDA was sentient. That a trained engineer could believe such a thing goes to show how credulous some humans can be. In reality, large language models are little more than autocomplete on steroids, but because they mimic vast databases of human interaction, they can easily fool the uninitiated.

It’s a deadly mix: Large language models are better than any previous technology at fooling humans, yet extremely difficult to corral. Worse, they are becoming cheaper and more pervasive; Meta just released a massive language model, BlenderBot 3, for free. 2023 is likely to see widespread adoption of such systems—despite their flaws….”… 
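
The ELIZA effect named above predates LLMs by half a century: Joseph Weizenbaum’s 1966 ELIZA fooled users with nothing more than pattern-matching and reflection. A sketch of how little machinery that takes (hypothetical rules in ELIZA’s style, not Weizenbaum’s original script):

```python
import re

# A few ELIZA-style rules: reflect the user's words back as a question.
# (Hypothetical rules for illustration; not the original DOCTOR script.)
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
]

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1))
    return "Tell me more."  # default when no rule matches

print(eliza("I feel anxious about AI"))  # -> "Why do you feel anxious about AI?"
```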


>AI, ChatGPT

futurism.com 3-2-2023 OPENAI CEO SAYS HIS TECH IS POISED TO “BREAK CAPITALISM” – BERNIE SANDERS HE IS NOT. – by NOOR AL-SIBAI

Marx’s Revenge – In what’s perhaps an attempt to head off bad press — or, at very least, convince people he’s not the bad guy — OpenAI CEO Sam Altman has given Forbes an interview in which he claims that his for-profit company is ultimately going to bring about capitalism’s downfall.


theatlantic.com 2-2-2023 ChatGPT Is About to Dump More Work on Everyone – Artificial intelligence could spare you some effort. Even if it does, it will create a lot more work in the process.- By Ian Bogost

Have you been worried that ChatGPT, the AI language generator, could be used maliciously—to cheat on schoolwork or broadcast disinformation? You’re in luck, sort of: OpenAI, the company that made ChatGPT, has introduced a new tool that tries to determine the likelihood that a chunk of text you provide was AI-generated.

I say “sort of” because the new software faces the same limitations as ChatGPT itself: It might spread disinformation about the potential for disinformation. As OpenAI explains, the tool will likely yield a lot of false positives and negatives, sometimes with great confidence. In one example, given the first lines of the Book of Genesis, the software concluded that it was likely to be AI-generated. God, the first AI.

On the one hand, OpenAI appears to be adopting a classic mode of technological solutionism: creating a problem, and then selling the solution to the problem it created. But on the other hand, it might not even matter if either ChatGPT or its antidote actually “works,” whatever that means (in addition to its limited accuracy, the program is effective only on English text and needs at least 1,000 characters to work with). The machine-learning technology and others like it are creating a new burden for everyone. Now, in addition to everything else we have to do, we also have to make time for the labor of distinguishing between human and AI, and the bureaucracy that will be built around it…”…
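
Bogost’s “new burden” can be made concrete: anyone calling such a detector inherits its stated caveats. A sketch of the guard rails those limitations force on a caller — classify_text here is a hypothetical placeholder, not OpenAI’s actual API:

```python
# Hypothetical wrapper around an AI-text detector, reflecting the
# limitations quoted above: at least 1,000 characters of input, and
# scores that are likelihoods, never verdicts.

MIN_CHARS = 1000  # the stated minimum input length

def classify_text(text: str) -> float:
    """Placeholder detector; a real one would return P(AI-generated)."""
    return 0.5  # maximally uncertain, which is often the honest answer

def detect(text: str) -> str:
    if len(text) < MIN_CHARS:
        return "too short to classify"
    p = classify_text(text)
    # Even a confident score can be a false positive (see the Genesis example).
    if p > 0.9:
        return f"likely AI-generated (p={p:.2f})"
    return "inconclusive"

print(detect("In the beginning..."))  # -> "too short to classify"
```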


mindmatters.ai 1-2-2023 CHAT GPT VIOLATES ITS OWN MODEL – Based on these exchanges, we can at least say the chatbot is more than just the ChatGPT neural network – Eric Holloway

psychologytoday.com 1-2023 ChatGPT Makes Us Human – The AI chatbot’s limitations allow us to appreciate our own.

theguardian.com 1-2023 ChatGPT: what can the extraordinary artificial intelligence chatbot do? – Ask the AI program a question, as millions have in recent weeks, and it will do its best to respond – by Ian Sample 

…”…As OpenAI notes: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers” and “will sometimes respond to harmful instructions or exhibit biased behaviour.” It can also give long-winded replies, a problem its developers put down to trainers “preferring long answers that look more comprehensive”.

“One of the biggest problems with ChatGPT is that it comes back, very confidently, with falsities,” says Wooldridge. “It doesn’t know what’s true or false. It doesn’t know about the world. You should absolutely not trust it. You need to check what it says.

“We are nowhere near the Hollywood dream of AI. It cannot tie a pair of shoelaces or ride a bicycle. If you ask it for a recipe for an omelette, it’ll probably do a good job, but that doesn’t mean it knows what an omelette is.”….”…


>AI, Google, DeepMind, ChatGPT

independent.co.uk DeepMind’s AI chatbot can do things that ChatGPT cannot, CEO claims – ‘When it comes to very powerful technologies… we need to be careful,’ says Demis Hassabis – by Anthony Cuthbertson

“Google’s artificial intelligence division DeepMind is considering releasing its rival to the ChatGPT chatbot this year, according to founder Demis Hassabis. DeepMind’s Sparrow chatbot reportedly has features that OpenAI’s ChatGPT lacks, including the ability to cite sources through reinforcement learning. However, Mr Hassabis warned about the potential dangers of powerful AI technology. Speaking to Time magazine, Mr Hassabis said Sparrow could be released as a private beta in 2023, but said that AI is “on the cusp” of reaching a level that could cause significant damage to humanity….”…

independent.co.uk/ 12-2021 Scientists make huge breakthrough to give AI mathematical capabilities never seen before – by Andrew Griffin

theguardian.com  29/10/2021 ‘Yeah, we’re spooked’: AI starting to have big real-world impact, says expert
Prof Stuart Russell says field of artificial intelligence needs to grow up quickly to ensure humans remain in control

theguardian.com   6/2021  Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’  by Zoë Corbyn

“My hope is that, by showing how AI systems work – by laying bare the structures of production and the material realities – we will have a more accurate account of the impacts, and it will invite more people into the conversation. These systems are being rolled out across a multitude of sectors without strong regulation, consent or democratic debate. … We’ve got a long way to go before this is green technology. Also, systems might seem automated but when we pull away the curtain we see large amounts of low-paid labour, everything from crowd work categorising data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous. … Bias is too narrow a term for the sorts of problems we’re talking about. Time and again, we see these systems producing errors … Unfortunately the politics of classification has become baked into the substrates of AI …”


> cognitive bias, intelligence, IQ

wikipedia.org The Dunning–Kruger effect is a cognitive bias whereby people with low ability, expertise, or experience regarding a certain type of task or area of knowledge tend to overestimate their ability or knowledge. Some researchers also include the opposite effect for high performers: their tendency to underestimate their skills. In popular culture, the Dunning–Kruger effect is often misunderstood as a claim about general overconfidence of people with low intelligence instead of specific overconfidence of people unskilled at a particular task.

yourtango.com 5-2020 There Are 9 Different ‘Types’ Of Intelligence — Which Kind Are You? – By Christine Schoenwald

gse.harvard.edu/howard-gardner – Multiple Intelligences Oasis – The Good Project


>AI

linkedin.com   5/2021  Understanding the 4 Types of Artificial Intelligence (AI) by Bernard Marr – Did you know there are four distinct types of artificial intelligence?

springer pdf Appendix A: One Hundred Definitions of AI – by Massimo Negrotti

>AI, Turing Test

physicsworld.com 5/2021 The Turing Test 2.0 – When Alan Turing devised his famous test to see if machines could think, computers were slow, primitive objects that filled entire rooms. Juanita Bawagan discovers how modern algorithms have transformed our understanding of the “Turing Test” and what it means for artificial intelligence. … Today, researchers are rewriting the rules, taking on new challenges and even developing “reverse” Turing Tests that can tell humans apart from bots. It seems the closer we get to truly intelligent machines, the fuzzier the lines of the Turing Test become. Conceptual questions, such as the meaning of intelligence and human behaviour, are centre stage once more.
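
Stripped of the philosophy, Turing’s original setup is a simple protocol: a judge questions two hidden players and must say which one is the machine. A toy harness of that protocol (my sketch; any pair of reply functions can be plugged in):

```python
import random

def imitation_game(human_reply, machine_reply, questions, judge) -> bool:
    """Toy Turing Test: the judge sees answers from hidden players A and B
    and guesses which is the machine. Returns True if the judge is right."""
    machine_is_a = random.random() < 0.5
    a, b = (machine_reply, human_reply) if machine_is_a else (human_reply, machine_reply)
    transcript = [(q, a(q), b(q)) for q in questions]
    return judge(transcript) == machine_is_a  # judge returns True for "A is the machine"

# With a perfect mimic, the judge can do no better than chance:
wins = sum(
    imitation_game(lambda q: q.lower(), lambda q: q.lower(),
                   ["What is love?"], lambda t: random.random() < 0.5)
    for _ in range(1000)
)
print(f"judge correct in {wins}/1000 games")  # roughly 500
```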

[Figure 1: Turing Test illustration – physicsworld.com/wp-content/uploads/2021/05/PWMay21Bawagan-Turing-Illustration-635×278.jpg]

https://www.academia.edu/   2021  Should I Be Scared of Artificial Intelligence? Mohammad Mushfequr Rahman

academia.edu   2021  Artificial General Intelligence and Creative Economy  by Konstantinos I Kotis


>AI, intelligence


venturebeat.com 9/6/2021 DeepMind says reinforcement learning is ‘enough’ to reach general AI – by Ben Dickson

In a new paper submitted to the peer-reviewed Artificial Intelligence journal, scientists at U.K.-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but by sticking to a simple but powerful principle: reward maximization.
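
The paper’s claim is that reward maximization alone is enough; its textbook embodiment is reinforcement learning. A minimal sketch of the principle — tabular Q-learning on a toy chain world (my example, not DeepMind’s) — where sensible behaviour emerges from nothing but a reward signal:

```python
import random

# Toy chain world: states 0..4, actions move left (-1) or right (+1);
# reward only for reaching the rightmost state. The agent improves
# purely by maximizing reward, with no task-specific knowledge built in.
N_STATES, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(2000):
    s = random.randrange(N_STATES)
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    a = random.choice(ACTIONS) if random.random() < 0.2 else max(ACTIONS, key=lambda x: Q[(s, x)])
    s2, r = step(s, a)
    best_next = max(Q[(s2, x)] for x in ACTIONS)
    Q[(s, a)] += 0.1 * (r + 0.9 * best_next - Q[(s, a)])  # Bellman update

policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES)}
print(policy)  # learned: move right (+1) from every state
```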


physicsworld.com 12/5/2021 AI and particle physics: a powerful partnership – Experimental particle physicist Jessica Esquivel explores the beneficial collaboration between artificial intelligence and particle physics that is advancing both fields