
Artificial Intelligence Pioneer Quits Google / Issues Warning


Recommended Posts

  • Advanced Member

Also, Elon Musk warned yet again about the potential dangers of AI in a recent interview (April 17). Even though he has a vested interest in the growth of AI, he warned of "civilization destruction".

https://www.nbcnews.com/tech/tech-news/artificial-intelligence-pioneer-leaves-google-warns-technologys-future-rcna82242

The "godfather of AI" is issuing a warning about the technology he helped create.

Geoffrey Hinton, a trailblazer in artificial intelligence, has joined the growing list of experts sharing their concerns about the rapid advancement of artificial intelligence. The renowned computer scientist recently left his job at Google to speak openly about his worries about the technology and where he sees it going. 

 

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said in an interview with The New York Times.

Hinton is worried that future versions of the technology pose a real threat to humanity.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said in the interview. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Hinton, 75, is best known for his pioneering work on deep learning, which uses mathematical structures called neural networks to pull patterns from massive sets of data.

Like other experts, he believes the race among Big Tech companies to develop more powerful AI will only escalate into a global race.

 

Hinton tweeted Monday morning that he felt Google had acted responsibly in its development of AI, but that he had to leave the company to speak out.

Jeff Dean, senior vice president of Google Research and AI, said in an emailed statement: “Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google. I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well! As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

Hinton is a notable addition to a group of technologists who have been speaking out publicly about the unbridled development and release of AI.

Tristan Harris and Aza Raskin, the co-founders of the Center for Humane Technology, spoke with “Nightly News” host Lester Holt in March about their own concerns around AI. 

“What we want is AI that enriches our lives. AI that works for people, that works for human benefit that is helping us cure cancer, that is helping us find climate solutions,” Harris said during the interview. “We can do that. We can have AI and research labs that’s applied to specific applications that does advance those areas. But when we’re in an arms race to deploy AI to every human being on the planet as fast as possible with as little testing as possible, that’s not an equation that’s going to end well.”

An open letter from the Association for the Advancement of Artificial Intelligence, signed by 19 current and former leaders of the academic society, was released last month warning the public of the risks around AI and the need for collaboration to mitigate some of those concerns.

“We believe that AI will be increasingly game-changing in healthcare, climate, education, engineering, and many other fields,” the letter said. “At the same time, we are aware of the limitations and concerns about AI advances, including the potential for AI systems to make errors, to provide biased recommendations, to threaten our privacy, to empower bad actors with new tools, and to have an impact on jobs.”

Hinton, along with scientists Yoshua Bengio and Yann LeCun, won the Turing Award in 2019, known as the tech industry’s version of the Nobel Prize, for their advancements in AI.

Hinton, Bengio and LeCun were open about their concerns with AI but optimistic about the technology's potential, including earlier detection of health risks than doctors can provide and more accurate warnings about natural disasters such as earthquakes and floods.

“One thing is very clear, the techniques that we developed can be used for an enormous amount of good affecting hundreds of millions of people,” Hinton previously told The Associated Press.

 

  • Advanced Member

https://www.cnn.com/2023/04/17/tech/elon-musk-ai-warning-tucker-carlson/index.html

 

Elon Musk warned in a new interview that artificial intelligence could lead to “civilization destruction,” even as he remains deeply involved in the growth of AI through his many companies, including a rumored new venture.

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction,” Musk said in his interview with Tucker Carlson, which is set to air in two parts on Monday and Tuesday nights.

Musk has repeatedly warned recently of the dangers of AI, amid a proliferation of AI products for general consumer use, including from tech giants like Google and Microsoft. Musk last month also joined a group of other tech leaders in signing an open letter calling for a six-month pause in the “out of control” race for AI development.

Musk said Monday night he supports government regulation of AI, even though “it’s not fun to be regulated.” By the time AI “may be in control,” it could be too late to put regulations in place, Musk said.

“A regulatory agency needs to start with a group that initially seeks insight into AI, then solicits opinion from industry, and then has proposed rule-making,” Musk said.

In fact, Musk has been sounding alarms about AI for years – something he acknowledged in a tweet over the weekend – but he has also been a part of the broader AI arms race through investments across his sprawling empire of companies.

Tesla, for example, relies so much on artificial intelligence that it hosts an annual AI day to tout its work. Musk was a founding member of OpenAI, the company behind products like ChatGPT (Musk has said the evolution of OpenAI is “not what I intended at all.”) And at Twitter, Musk said in a tweet last month that he plans to “use AI to detect & highlight manipulation of public opinion on this platform.”

To Carlson, Musk said he put “a lot of effort” into creating OpenAI to serve as a counterweight to Google, but took his “eye off the ball.”

Now, Musk said he wants to create a rival to the AI offerings by tech giants Microsoft and Google. In his interview with Carlson, Musk said “we’re going to start something which I call TruthGPT.” Musk described it as a “maximum truth-seeking AI” that “cares about understanding the universe.”

“Hopefully there’s more good than harm,” Musk said.

More recently, Musk is reportedly working to build a generative AI startup that could rival OpenAI and ChatGPT. The Financial Times reported last week that Musk is building a team of AI researchers and engineers, as well as seeking investors for a new venture, citing people familiar with the billionaire’s plans. Musk last month incorporated a company called X.AI, the report says, citing Nevada business records.

During his conversation with Carlson, Musk addressed his ownership of Twitter, which he bought for $44 billion and which has been mired in controversy since.

“I thought there’d probably be some negative reactions,” Musk told Carlson, saying the public will ultimately decide the app’s future.

The main account for the New York Times, which had previously told CNN it would not pay for verification, lost its blue check mark earlier this month.

“There’s obviously a lot of organizations that are used to having sort of unfettered influence on Twitter that no longer have that,” Musk said, appearing to give the 171-year-old newspaper advice on how to manage the content of its account, calling its feed “unreadable.”

Musk said he had been an active Twitter user since 2009 and had started developing a “bad feeling” about where the app was heading, though he did not specify what it was. He said he later decided to acquire the platform after unsatisfying conversations with its board and management.

 
 
 

  • Advanced Member
On 5/2/2023 at 9:06 AM, Eddie Mecca said:

he warned of "civilization destruction"

Salam

‘The Gospel’: how Israel uses AI to select bombing targets in Gaza

Concerns over data-driven ‘factory’ that significantly increases the number of targets for strikes in the Palestinian territory

Quote

A short statement on the IDF website claimed it was using an AI-based system called Habsora (the Gospel, in English) in the war against Hamas to “produce targets at a fast pace”.

 

Quote

The IDF said that “through the rapid and automatic extraction of intelligence”, the Gospel produced targeting recommendations for its researchers “with the goal of a complete match between the recommendation of the machine and the identification carried out by a person”.

Multiple sources familiar with the IDF’s targeting processes confirmed the existence of the Gospel to +972/Local Call, saying it had been used to produce automated recommendations for attacking targets, such as the private homes of individuals suspected of being Hamas or Islamic Jihad operatives.

 

Quote

This article also draws on testimonies published by the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, which have interviewed several current and former sources in Israel’s intelligence community who have knowledge of the Gospel platform.

Their comments offer a glimpse inside a secretive, AI-facilitated military intelligence unit that is playing a significant role in Israel’s response to the Hamas massacre in southern Israel on 7 October.

 

Quote

Israel’s military has made no secret of the intensity of its bombardment of the Gaza Strip. In the early days of the offensive, the head of its air force spoke of relentless, “around the clock” airstrikes. His forces, he said, were only striking military targets, but he added: “We are not being surgical.”

 

Quote

As Israel resumes its offensive after a seven-day ceasefire, there are mounting concerns about the IDF’s targeting approach in a war against Hamas that, according to the health ministry in Hamas-run Gaza, has so far killed more than 15,000 people in the territory.

The IDF has long burnished its reputation for technical prowess and has previously made bold but unverifiable claims about harnessing new technology. After the 11-day war in Gaza in May 2021, officials said Israel had fought its “first AI war” using machine learning and advanced computing.

The slowly emerging picture of how Israel’s military is harnessing AI comes against a backdrop of growing concerns about the risks posed to civilians as advanced militaries around the world expand the use of complex and opaque automated systems on the battlefield.

Quote

“Other states are going to be watching and learning,” said a former White House security official familiar with the US military’s use of autonomous systems.

 

From 50 targets a year to 100 a day

In early November, the IDF said “more than 12,000” targets in Gaza had been identified by its target administration division.

Describing the unit’s targeting process, an official said: “We work without compromise in defining who and what the enemy is. The operatives of Hamas are not immune – no matter where they hide.”

The activities of the division, formed in 2019 in the IDF’s intelligence directorate, are classified.

Quote

Aviv Kochavi, who served as the head of the IDF until January, has said the target division is “powered by AI capabilities” and includes hundreds of officers and soldiers.

In an interview published before the war, he said it was “a machine that produces vast amounts of data more effectively than any human, and translates it into targets for attack”.

 

According to Kochavi, “once this machine was activated” in Israel’s 11-day war with Hamas in May 2021 it generated 100 targets a day. “To put that into perspective, in the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.”

Quote

“That is a lot of houses,” the official told +972/Local Call. “Hamas members who don’t really mean anything live in homes across Gaza. So they mark the home and bomb the house and kill everyone there.”

Targets given ‘score’ for likely civilian death toll

In the IDF’s brief statement about its target division, a senior official said the unit “produces precise attacks on infrastructure associated with Hamas while inflicting great damage to the enemy and minimal harm to non-combatants”.

The precision of strikes recommended by the “AI target bank” has been emphasised in multiple reports in Israeli media. The Yedioth Ahronoth daily newspaper reported that the unit “makes sure as far as possible there will be no harm to non-involved civilians”. :blabla: :angry:

Quote

A former senior Israeli military source told the Guardian that operatives use a “very accurate” measurement of the rate of civilians evacuating a building shortly before a strike. “We use an algorithm to evaluate how many civilians are remaining. It gives us a green, yellow, red, like a traffic signal.”

However, experts in AI and armed conflict who spoke to the Guardian said they were sceptical of assertions that AI-based systems reduced civilian harm by encouraging more accurate targeting.

A lawyer who advises governments on AI and compliance with humanitarian law said there was “little empirical evidence” to support such claims. Others pointed to the visible impact of the bombardment.

Quote

“Look at the physical landscape of Gaza,” said Richard Moyes, a researcher who heads Article 36, a group that campaigns to reduce harm from weapons.

[Satellite images of the northern city of Beit Hanoun in Gaza before (10 October) and after (21 October) damage caused by the war.]

“We’re seeing the widespread flattening of an urban area with heavy explosive weapons, so to claim there’s precision and narrowness of force being exerted is not borne out by the facts.”

Quote

Multiple sources told the Guardian and +972/Local Call that when a strike was authorised on the private homes of individuals identified as Hamas or Islamic Jihad operatives, target researchers knew in advance the number of civilians expected to be killed.

One source said there had been occasions when “there was doubt about a target” and “we killed what I thought was a disproportionate amount of civilians”.

‘Mass assassination factory’

Sources familiar with how AI-based systems have been integrated into the IDF’s operations said such tools had significantly sped up the target creation process.

“We prepare the targets automatically and work according to a checklist,” a source who previously worked in the target division told +972/Local Call. “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”

Quote

For some experts who research AI and international humanitarian law, an acceleration of this kind raises a number of concerns.

Dr Marta Bo, a researcher at the Stockholm International Peace Research Institute, said that even when “humans are in the loop” there is a risk they develop “automation bias” and “over-rely on systems which come to have too much influence over complex human decisions”.


Moyes, of Article 36, said that when relying on tools such as the Gospel, a commander “is handed a list of targets a computer has generated” and they “don’t necessarily know how the list has been created or have the ability to adequately interrogate and question the targeting recommendations”.

 

“There is a danger,” he added, “that as humans come to rely on these systems they become cogs in a mechanised process and lose the ability to consider the risk of civilian harm in a meaningful way.”

https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

 

