Exploring the intersection of consciousness, the nature of intelligence, computer science, and philosophy.


Large Language Models

 

Ethical concerns surrounding the potential misuse of AI and the biases within training data prompt deeper philosophical discussions about responsibility, fairness, and the future of AI-human interactions.

 
 

Large Language Models learn from vast datasets, capturing human knowledge and creativity; yet the mechanics behind their ability to generate contextually relevant and coherent text remain only partially understood.

Despite the lack of explicit programming for specific tasks, these models excel in diverse applications, raising questions about the true nature of artificial intelligence and its limits.

While they often generate fluent, human-like text, Large Language Models occasionally produce nonsensical or irrelevant responses, raising intriguing questions about how deeply they understand context and logic.

The emergence of creative outputs, such as poetry and storytelling, sparks curiosity about whether these models possess a form of artistic understanding or merely mimic patterns they've encountered.

 

One of the most mysterious and surprising aspects of large language models is their ability to generate creative, coherent, and contextually relevant text without explicit instructions for specific tasks. They can handle tasks such as translation and summarization, and even write poetry and stories, despite not being explicitly programmed to do so.

This ability to generalize across a wide range of tasks and produce outputs that exhibit human-like understanding has astonished researchers and users alike. It raises profound questions about the nature of intelligence, the limits of artificial intelligence, and the mechanisms that underlie these models' remarkable performance in diverse areas.


On Intelligence, Self-Awareness & the Nature of Consciousness


 

This is one of my middle-of-the-night muses. A brief and wholly inadequate glance at the intersection of AI, the nature of intelligence, and consciousness.

What keeps me up at night on the edge of wonder is when science and philosophy become enmeshed.

Here's an example of where science and philosophy make contact: when individual parts combine, they become something entirely new. Take, for example, a single neuron: a single part with potential. But put enough of them together, and a nerve emerges. Continue to add nerves (oversimplifying here), and you create a neural network, then eventually a brain. At what point does any neural network become intelligent? When does rudimentary intelligence become conscious? Then, of course, there's the enigma of consciousness.
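That parts-combining-into-wholes idea has a direct analogue in the artificial neural networks behind today's AI. As a minimal sketch (the weights below are purely illustrative, not from any real model): a single artificial neuron is just a weighted sum squashed through a function, a layer is many neurons reading the same inputs, and a network is layers feeding into layers.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum passed through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """A layer: many neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

def network(inputs, layers_spec):
    """A network: layers feeding into layers -- parts combining into a whole."""
    activations = inputs
    for weight_rows, biases in layers_spec:
        activations = layer(activations, weight_rows, biases)
    return activations

# A tiny two-layer network with hand-picked, illustrative weights.
layers_spec = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]),  # hidden layer: 2 neurons
    ([[2.0, -2.0]], [0.0]),                     # output layer: 1 neuron
]
output = network([0.8, 0.3], layers_spec)
```

None of these few lines is intelligent on its own, which is exactly the puzzle: scale this same recipe up by billions of neurons and something qualitatively different seems to emerge.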

Until almost yesterday in human history, people thought consciousness emerged only when language did. So, according to the science of yesteryear, babies weren't conscious until they started speaking. Funny, right? I'll sound guilty of presentism when I say society was adorable back then - but honestly, we're no less adorable and unaware today. Only now we have mobile devices, and the jury is still out on whether that is a blessing or a curse for humanity.

At a level I haven't felt in years, AI has stirred the inner skeptic, curious explorer, and the deepest dreamer in me; and gosh, are the three of them dancing.

Of course, I used a lot of my AI creations in this video because, you know ... AI.


 

What Do We Know About Our Minds?: A Conversation with Paul Bloom (Episode #317)

Sam Harris speaks with Paul Bloom about the state of psychological science. They discuss fiction as a window onto the mind, recent developments in AI, the tension between misinformation and free speech, bullshitting vs lying, truth vs belonging, reliance on scientific authority, the limits of reductionism, consciousness vs intelligence, Freud, behaviorism, the unconscious origins of behavior, confabulation, the limitations of debate, language, Koko the gorilla, mental health, happiness, behavioral genetics, birth-order effects, living a good life, the remembered and experiencing selves, and other topics. Paul Bloom is Professor of Psychology at the University of Toronto, and Brooks and Suzanne Ragen Professor Emeritus of Psychology at Yale University. Paul Bloom studies how children and adults make sense of the world, with special focus on pleasure, morality, religion, fiction, and art. He has won numerous awards for his research and teaching. He is past-president of the Society for Philosophy and Psychology, and co-editor of Behavioral and Brain Sciences. He has written for scientific journals such as Nature and Science, and for popular outlets such as The New York Times, The Guardian, The New Yorker, and The Atlantic Monthly. He is the author of seven books, including his latest Psych: The Story of the Human Mind.

 

The A.I. Dilemma - March 9, 2023

Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world. This presentation is from a private gathering in San Francisco on March 9th, 2023 with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4. We encourage viewers to consider calling their political representatives to advocate for holding hearings on AI risk and creating adequate guardrails.

 

The Dangers Of AI Are WEIRDER Than You Think! | Yoshua Bengio

The launch of ChatGPT broke records in consecutive months between December 2022 and February 2023. With over 1 billion visits a month for ChatGPT, over 100,000 users and $45 million in revenue for Jasper A.I., the race to adopting A.I. at scale has begun.

Does the global adoption of artificial intelligence have you concerned or apprehensive about what’s to come? On one hand, it’s easy to get caught up in the possibilities of co-existing with A.I. and living an enhanced, upgraded human experience. We already have tech and A.I. integrated into so many of our daily habits and routines: Apple Watches, Oura rings, social media algorithms, chatbots, and on and on. Yoshua Bengio has dedicated more than 30 years of his computer science career to deep learning. He’s an award-winning computer scientist known for his breakthroughs in artificial neural networks. Why, after three decades of contributing to the advancement of A.I. systems, is Yoshua now calling to slow down the development of powerful A.I. systems?

This conversation is about being open-minded and aware of the dangers of AI we all need to consider, from the perspective of one of the world’s leading experts in artificial intelligence. Conscious computers, A.I. trolls, the evolution of machines, and what it means to be a neural network are just a few of the things you’ll find interesting in this conversation.

QUOTES:

“We need to be maybe much more careful and provide much more of guidance and guardrails in regulation, to minimize potential harm that could come out of more and more powerful systems.”

“I would say misinformation, disinformation is the greatest large-scale danger.”

“With AI becoming more powerful, I think it's time to really accelerate that process of regulating to protect the public, and society.”

 

The AI revolution: Google's developers on the future of artificial intelligence | 60 Minutes
"60 Minutes" is the most successful television broadcast in history. Offering hard-hitting investigative reports, interviews, feature segments and profiles of people in the news, the broadcast began in 1968 and is still a hit, over 50 seasons later, regularly making Nielsen's Top 10.

 

Mathematician debunks AI intelligence | Edward Frenkel and Lex Fridman

Edward Frenkel is a mathematician at UC Berkeley working on the interface of mathematics and quantum physics. He is the author of Love and Math: The Heart of Hidden Reality.

 

Making Sense of Artificial Intelligence | November 22, 2022

Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you’ll find this series fascinating. In this episode, we explore the landscape of Artificial Intelligence. We’ll listen in on Sam’s conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI — including the control problem and the value-alignment problem — as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We’ll then be introduced to philosopher Nick Bostrom’s “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance. We’ll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We’ll then touch the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder if the kinds of systems we’re building using “Deep Learning” are really marching us towards our super-intelligent overlords. Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe. SUBSCRIBE to gain access to all full-length episodes of the podcast at https://samharris.org/subscribe/