[05/22/2024] Nature Human Behaviour - Navigating the AI Frontier
The rapid development of generative AI has brought about a paradigm shift in content creation, knowledge representation and communication. This hot ‘AI summer’ has created much excitement, as well as disruption and concern. This Focus explores the new opportunities that AI tools offer for science and society. Our authors also confront the numerous challenges that intelligent machines pose and explore strategies to tackle them.
Large language models are capable of impressive feats, but the job of scientific review requires more than the statistics of published work can provide.
In Japan, people express gratitude towards technology, and this helps them to achieve balance. Yet dominant narratives teach us that anthropomorphizing artificial intelligence (AI) is not healthy.
Generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard to detect by humans and may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help to tackle the threats of AI-generated disinformation.
Algorithms are designed to learn user preferences by observing user behaviour. But because biased behaviour can diverge from genuine preference, such algorithms fail to reflect user preferences whenever psychological biases affect user decision making. For algorithms to enhance social welfare, algorithm design needs to be psychologically informed.
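To make this failure mode concrete, here is a minimal, hypothetical simulation (our illustration, not the authors' model; the bias parameter and the click-rate estimator are assumptions): an algorithm estimates a user's preference as the empirical rate of observed choices, but a present bias inflates the appeal of the instantly gratifying option at choice time, so the learned preference tracks the bias rather than the user.

```python
import random

# Hypothetical illustration (not from the article): an algorithm that
# estimates preferences purely from observed choices absorbs any bias
# that distorts those choices.

random.seed(0)

TRUE_PREFERENCE = 0.3  # user genuinely wants the instant option 30% of the time
PRESENT_BIAS = 0.4     # assumed bias toward instant gratification at choice time

def observed_choice() -> int:
    """Return 1 if the user clicks the instantly gratifying option."""
    p_instant = min(1.0, TRUE_PREFERENCE + PRESENT_BIAS)
    return 1 if random.random() < p_instant else 0

# The algorithm "learns" the preference as the empirical click rate.
clicks = [observed_choice() for _ in range(10_000)]
learned_preference = sum(clicks) / len(clicks)

print(f"true preference for instant option: {TRUE_PREFERENCE:.2f}")
print(f"preference learned from behaviour:  {learned_preference:.2f}")
# Prints roughly 0.30 versus 0.70: the learned value tracks biased
# behaviour, not the underlying preference.
```

A psychologically informed design would model the bias explicitly rather than treating every click as a clean readout of preference.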
Large language models can be construed as ‘cognitive models’, scientific artefacts that help us to understand the human mind. If made openly accessible, they may provide a valuable model system for studying the emergence of language, reasoning and other uniquely human behaviours.
Large language models (LLMs) are impressive technological creations but they cannot replace all scientific theories of cognition. A science of cognition must focus on humans as embodied, social animals who are embedded in material, cultural and technological contexts.
Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are commonplace. To ensure our use of LLMs does not degrade science, we must use them as zero-shot translators: to convert accurate source material from one form to another.
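As a concrete reading of the ‘zero-shot translator’ pattern, here is a minimal hypothetical sketch (the call_llm function, the prompt wording and the source text are all placeholder assumptions, not the authors' method): the model receives verified source material and is asked only to change its form, never to supply content of its own.

```python
# Hypothetical sketch of the "zero-shot translator" usage pattern.
# `call_llm` is a placeholder: plug in whichever model client you use.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("connect this to an actual LLM client")

# Invented example of verified source material (illustration only).
VERIFIED_SOURCE = (
    "The trial enrolled 120 participants; the treatment group showed a "
    "12% reduction in symptom scores relative to placebo."
)

prompt = (
    "Rewrite the source text below as one plain-language sentence for a "
    "general audience. Use ONLY information stated in the source; do not "
    "add numbers, causes or claims of your own.\n\n"
    f"Source:\n{VERIFIED_SOURCE}"
)

plain_language_summary = call_llm(prompt)
# The form changes (register, structure); the facts must come entirely
# from the verified source, so the model never acts as a fact generator.
```

The design choice sits in the prompt's constraint: the model translates between forms and is never asked to decide what is true.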
If mistakes are made in clinical settings, patients suffer. Artificial intelligence (AI) generally — and large language models specifically — are increasingly used in health settings, but the way that physicians use AI tools in this high-stakes environment depends on how information is delivered. AI toolmakers have a responsibility to present information in a way that minimizes harm.
State-of-the-art generative artificial intelligence (AI) can now match humans in creativity tests and is on the cusp of augmenting the creativity of every knowledge worker on Earth. We argue that enriching generative AI applications with insights from the psychological sciences may revolutionize our understanding of creativity and lead to increasing synergies in human–AI hybrid intelligent interfaces.
The rise of generative AI requires a research agenda grounded in the African context to determine locally relevant strategies for its development and use. With a critical mass of evidence on the risks and benefits that generative AI poses to African societies, the scaled use of this new technology might help to reduce rising global inequities.
The current debate surrounding the use and regulation of artificial intelligence (AI) in Brazil has social and political implications. We summarize these discussions, advocate for balance in the current debate around AI and fake news, and caution against preemptive AI regulation.
In this Perspective, the authors examine the psychological factors that shape attitudes towards AI tools, while also investigating strategies to overcome resistance when AI systems offer clear benefits.
Defining AI from the user’s perspective
Automation of intelligence. Whereas traditional automation uses mechanisms, tools or software to automate repetitive tasks, AI uses advanced algorithms to replicate or augment tasks typically associated with human intelligence.
Digital and physical manifestations. AI can be purely digital, such as algorithms that process data, or physically embodied, such as robots or self-driving cars. The digital or physical nature of AI could influence user perceptions and interactions.
User awareness and interaction. Although AI runs on the back end of many technologies, we emphasize AI systems whose presence users are directly or indirectly aware of. This awareness can range from a general understanding that AI is at work (for example, in a recommendation system) to more specific knowledge about the underlying technology (for example, the use of a particular type of neural network).
Diverse underlying mechanisms. AI can be based on a multitude of algorithms and architectures. Some may be ‘opaque’ or ‘black box’, in which the relationship between inputs and outputs is complex and not easily understandable. Others might be more interpretable, with clear and intuitive mappings (also known as ‘transparent AI’ or ‘white box’). The nature of the underlying AI could in principle influence user trust, understanding and acceptance. Because automating feats of human intelligence typically demands highly complex models, most AI entails opaque algorithms.
Explainability and interpretability. Some AI systems offer explanations for their decisions. Because these explanations are produced by separate algorithms trained to generate rationales for the black-box AI behaviour, they are often approximations and may not fully capture the intricacies of the black-box algorithm itself (a minimal code sketch of this follows the list). The degree to which an AI system is explainable can affect user trust and satisfaction.
Variability in user perceptions. Recognizing that AI spans a vast array of technologies, user perceptions, interactions and attitudes could vary substantially. Factors influencing these perceptions could in principle include the AI’s form, function and design, as well as the context in which it is used.
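To ground the points on opacity and explainability above, here is a minimal sketch of a post-hoc global surrogate explainer (an assumption for illustration; the data, the models and the scikit-learn choices are ours, not from the box): a transparent linear model is trained to mimic a black-box model's predictions, and its imperfect fidelity shows why such explanations are approximations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
# Ground truth includes an interaction term a linear surrogate cannot express.
y = X[:, 0] + 2 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=1000)

# "Black box": a complex model with no intuitive input-output mapping.
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
preds = black_box.predict(X)

# Post-hoc explainer: a transparent surrogate trained to mimic the black box.
surrogate = LinearRegression().fit(X, preds)

print("surrogate coefficients:", np.round(surrogate.coef_, 2))
print("fidelity (R^2 against the black box):",
      round(r2_score(preds, surrogate.predict(X)), 2))
# The coefficients read as an intuitive 'explanation', but the low fidelity
# score shows the surrogate misses the interaction the black box learned:
# the explanation approximates, and does not fully capture, the model.
```

This is the gap the box describes: the explanation comes from a separate, simpler algorithm, so user trust should track its fidelity, not only its intuitiveness.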
Artificial intelligence tools and systems are increasingly influencing human culture. Brinkmann et al. argue that these ‘intelligent machines’ are transforming the fundamental processes of cultural evolution: variation, transmission and selection.