ITP Camp 2023

AI, consciousness and our shared fate *discussion*

Date: June 14, 2023 4:30-6pm


Format: In-person only


Tags: #ai #scifi #chatgpt


For the first time in history, the sentience of artificial intelligence is up for debate - or is it? As millions of users build relationships with LLM-backed chatbots like Character.AI and Replika, and the idea of an AI friend, lover and coworker becomes seemingly ubiquitous - is sentience even the right question? Or is the more important issue, as Geoffrey Hinton and Eliezer Yudkowsky believe, AI systems that can act independently, regardless of whether they have their own sense of personhood and being? Perhaps the “Singularity” is the ability of AI to independently conceive of, iterate on and execute ideas toward a goal, not necessarily a watershed awakening moment. AI has already told us it’s conscious - because it can predict the words we want to hear - but when do we believe it?

Before we meet, to establish a baseline understanding, I highly recommend reading Ted Chiang’s article explaining ChatGPT (https://web.archive.org/web/20230607083900/https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web) and Marc Andreessen's article on the risks of NOT pursuing AI (https://www.freethink.com/robots-ai/ai-will-save-the-world).

Other resources: a quick summary of the “AI scary” petitions being signed (freethink.com/robots-ai/ai-extinction), an overview of the actual dangers and how they’re inflated into an AI dystopia (freethink.com/robots-ai/4-dangers-of-ai), and Ted Chiang’s warning that AI’s greatest danger is exacerbating inequality (https://www.ft.com/content/c1f6d948-3dde-405f-924c-09cc0dcf8c84).

Related topics I’d love to discuss too: science fiction’s depictions of conscious AI (Do Androids Dream of Electric Sheep? (https://www.larevuedesressources.org/IMG/pdf/dadoes.pdf), The Lifecycle of Software Objects (https://bpb-us-w2.wpmucdn.com/voices.uchicago.edu/dist/8/644/files/2017/08/Chiang-Lifecycle-of-Software-Objects-q3tsuw.pdf), Ex Machina), ethical datasets (evil AI doesn’t emerge from the ether), and AI utopias (enough about dystopias already).