
News in AI: can DeepMind be ethical?

Keeping up with an industry that evolves as quickly as AI is a difficult task. So, until an AI can do it for you, here's a summary of recent stories in the world of machine learning, along with notable research and experiments.

DeepMind

DeepMind, the AI research and development lab owned by Google, has released a paper proposing a framework for evaluating the social and ethical risks of AI systems.

The timing of the paper's release, which calls for varying levels of involvement from AI developers, application developers and "general public stakeholders" in AI assessment and auditing, is no accident.

The AI Safety Summit is coming soon: a UK-government-sponsored event that will bring together international governments, leading AI companies, civil society groups and research experts to focus on how best to manage the risks of the latest advances in AI, including generative AI (e.g. ChatGPT, Stable Diffusion and so on). The UK is planning to introduce a global advisory group on AI modeled on the UN's Intergovernmental Panel on Climate Change, made up of a rotating cast of academics who will write regular reports on cutting-edge developments in AI and their associated dangers.

DeepMind is making its perspective known, very visibly, ahead of on-the-ground policy talks at this two-day summit. And, to give credit where it's due, the research lab makes some reasonable (if obvious) points, such as calling for approaches to examine AI systems at the "point of human interaction," along with the ways these systems might be used and the impact they might have on society.

Chart showing which people would be best at evaluating which aspects of AI.

But when weighing DeepMind's proposals, it's instructive to look at how the lab's parent company, Google, scores in a study published by Stanford researchers that ranks ten top AI models according to how openly they operate.

Rated on 100 criteria, including whether its maker disclosed the sources of its training data, information about the hardware it used, the labor involved in training and other details, PaLM 2, one of Google's flagship text-analyzing AI models, scores a poor 40%.

Now, DeepMind did not develop PaLM 2, at least not directly. But the lab historically hasn't been consistently transparent about its own models, and the fact that its parent company fails to meet key transparency measures suggests there isn't much top-down pressure for DeepMind to do better.

On the other hand, beyond its public policy musings, DeepMind appears to be taking steps to change the perception that it keeps quiet about the architectures and inner workings of its models. The lab, along with OpenAI and Anthropic, committed several months ago to giving the UK government "early or priority access" to its AI models to support evaluation and safety research.

The question is: is this merely for show? After all, no one would accuse DeepMind of being a philanthropy: the lab brings in hundreds of millions of dollars in revenue each year, mostly by licensing its work internally to Google teams.

Perhaps the lab's next big ethical test will be Gemini, its upcoming AI chatbot, which DeepMind CEO Demis Hassabis has repeatedly promised will rival OpenAI's ChatGPT in its capabilities. If DeepMind wants to be taken seriously on the ethical AI front, it will have to fully and thoroughly detail Gemini's weaknesses and limitations, not just its strengths. We will certainly be watching closely to see how things develop in the coming months.

In other news

Here are some other notable AI stories:

  • Microsoft study finds flaws in GPT-4: A new Microsoft-affiliated scientific paper examined the "trustworthiness" (and toxicity) of large language models (LLMs), including OpenAI's GPT-4. The co-authors found that an earlier version of GPT-4 could be prompted more easily than other LLMs to output toxic and biased text.
  • ChatGPT gets web browsing and DALL-E 3: Speaking of OpenAI, the company has formally launched its internet-browsing feature for ChatGPT, some three weeks after reintroducing the feature in beta following several months of hiatus. In related news, OpenAI also transitioned DALL-E 3 into beta, a month after introducing the latest incarnation of the text-to-image generator.
  • Challengers to GPT-4V: OpenAI is set to soon release GPT-4V, a variant of GPT-4 that understands both images and text. But two open source alternatives beat it to the punch: LLaVA-1.5 and Fuyu-8B, a model from the well-funded startup Adept. Neither is as capable as GPT-4V, but both come close and, more importantly, are free to use.
  • Can AI play Pokémon?: In recent years, Seattle-based software engineer Peter Whidden has been training a reinforcement learning algorithm to navigate the classic first game in the Pokémon series. So far it only reaches Cerulean City, but Whidden is confident it will keep improving.
  • AI-Powered Language Tutor: Google is targeting Duolingo with a new Google Search feature designed to help people practice (and improve) their English speaking skills. The new feature, rolling out in the coming days to Android devices in select countries, will provide interactive speaking practice for language learners translating to or from English.
  • Amazon launches more warehouse robots: At an event, Amazon announced that it will begin testing Agility's bipedal robot, Digit, in its facilities. Reading between the lines, however, there's no guarantee that Amazon will actually deploy Digit across its warehouse facilities, which currently use more than 750,000 robotic systems.
  • Simulators upon simulators: The same week Nvidia demonstrated using an LLM to help write reinforcement learning code that guides a naive AI-driven robot toward performing a task better, Meta released Habitat 3.0, the latest version of its dataset for training AI agents in realistic indoor environments. Habitat 3.0 adds the possibility of human avatars sharing the space in VR.
  • China's tech titans invest in an OpenAI rival: Zhipu AI, a China-based startup developing AI models to rival those of OpenAI and others in the generative AI space, announced that it has raised 2.5 billion yuan ($340 million) in total funding so far this year. The announcement comes as geopolitical tensions between the United States and China ramp up, showing no signs of abating.
  • US chokes off China's AI chip supply: On the subject of geopolitical tensions, the Biden administration this week announced a series of measures to curb Beijing's military ambitions, including further restrictions on Nvidia's AI chip shipments to China. The A800 and H800, the two AI chips Nvidia designed specifically to keep shipping to China, will be affected by the new rules.
  • AI pop song covers go viral: A curious trend is taking off: TikTok accounts that use AI to make characters like Homer Simpson sing rock songs from the '90s and 2000s, such as "Smells Like Teen Spirit." They're fun on the surface, but there's a dark undertone to the whole practice.

More Machine Learning

Machine learning models keep driving advances in the life sciences. AlphaFold and RoseTTAFold showed how a stubborn problem, protein folding, could in effect be trivialized with the right AI model. Now David Baker (creator of the latter model) and his lab colleagues have expanded the prediction process to include more than just the structure of the relevant chains of amino acids. After all, proteins exist in a soup of other molecules and atoms, and predicting how they will interact with stray compounds or elements in the body is essential to understanding their actual shape and activity. RoseTTAFold All-Atom is a big step forward for simulating biological systems.

Image credit: MIT/Harvard University

Having a visual AI that enhances lab work, or acts as a learning tool, is also a great opportunity. The SmartEM project from MIT and Harvard puts a computer vision system and an ML control system inside a scanning electron microscope, which together drive the device to examine a specimen intelligently. It can avoid areas of low importance, focus on interesting or clear ones, and smartly label the resulting image as well.

Using AI and other high-tech tools for archaeological purposes never goes out of style (so to speak). Whether it's lidar (a sensor that emits continuous pulses of light and captures their returns) revealing Mayan cities and roads, or filling in the gaps of incomplete ancient Greek texts, it's always fascinating to see. And this reconstruction of a scroll thought to have been destroyed in the volcanic eruption that devastated Pompeii is one of the most impressive yet.

ML-interpreted CT scan of a rolled and burned papyrus. The visible word says "Purple."

Luke Farritor, a computer science student at the University of Nebraska-Lincoln, trained a machine learning model to amplify subtle patterns in scans of the charred, rolled-up papyrus that are invisible to the naked eye. His was one of many methods attempted in an international challenge to read the scrolls, and it could be refined for valuable academic work. The work was reported in Nature. What was on the scroll? So far, just the word "purple," but even that has papyrologists beside themselves.

Another academic win for AI comes in this system for vetting and suggesting citations on Wikipedia. Of course, the AI doesn't know what is true or factual, but it can gather from context what a high-quality Wikipedia article and citation look like, and scrape the site and the web for alternatives. No one is suggesting we let the robots run the famous user-driven online encyclopedia, but it could help shore up articles whose citations are missing or whose editors are unsure.

Example of a mathematical problem solved by Llemma.

Language models can be fine-tuned on many topics and, surprisingly, advanced mathematics is one of them. Llemma is a new open model trained on mathematical proofs and papers that can solve fairly complex problems. It's not the first: Google Research's Minerva works on similar capabilities. But Llemma's success on similar problem sets, and its improved efficiency, show that "open" models (for whatever the term is worth) are competitive in this space. It's not desirable for certain types of AI to be dominated by proprietary models, so open replication of their capabilities is valuable even when it doesn't break new ground.

It's a little worrying that Meta is making progress, in its own academic work, toward mind reading, but as with most studies in this area, the way it's presented oversells the process. In a paper titled "Brain Decoding: Towards Real-Time Reconstruction of Visual Perception," it may sound as if the researchers are straight-up reading minds.

Images shown to people (left) and the generative AI's guesses at what the person is perceiving (right).

But it's a little more indirect than that. By studying what a high-frequency brain scan looks like when people look at images of certain things, such as horses or airplanes, researchers can make near real-time reconstructions of what they think the person is thinking or looking at. Still, it seems likely that generative AI has a role to play here, in how it can create a visual expression of something, even if it doesn't directly correspond to the scans.

Should we be using AI to read people's minds, if it ever becomes possible? Ask DeepMind (see above).

Finally, a project from LAION that is, for now, more aspirational than concrete, but laudable all the same. Multilingual Contrastive Learning for Audio Representation Acquisition, or CLARA, aims to give language models a better understanding of the nuances of human speech. You know how sarcasm or a lie can be picked up from sub-verbal cues like tone or pronunciation? Machines are pretty bad at that, which is bad news for any human-AI interaction. CLARA uses a multilingual library of audio and text to identify emotional states and other non-verbal "speech understanding" cues.
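CLARA's actual training code isn't described here, but contrastive audio-text learning in general pulls matched audio/caption embedding pairs together and pushes mismatched pairs apart. Below is a minimal NumPy sketch of the symmetric InfoNCE objective such systems typically use; the `contrastive_loss` function and the toy embeddings are illustrative assumptions, not CLARA's API.

```python
import numpy as np

def contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Row i of audio_emb and row i of text_emb are a matched pair;
    every other row in the batch serves as a negative example.
    """
    # L2-normalize so dot products become cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity matrix, sharpened by the temperature
    logits = a @ t.T / temperature
    n = logits.shape[0]
    labels = np.arange(n)  # the i-th audio matches the i-th text

    def xent(l):
        # Cross-entropy of each row against the diagonal "correct" entry
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the audio->text and text->audio directions
    return (xent(logits) + xent(logits.T)) / 2

# Toy batch: matched pairs should score a much lower loss than random ones
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
loss_matched = contrastive_loss(emb, emb)
loss_random = contrastive_loss(emb, rng.normal(size=(4, 8)))
print(loss_matched < loss_random)
```

The temperature is the usual knob here: a small value makes the softmax concentrate on the single best-matching pair, which is what lets the model learn fine-grained distinctions like tone from otherwise similar clips.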
