- Doug Lenat
- Using inferences to combine knowledge
- Example:
- Forces pushing us toward Singularity
- Competitive cutting edge apps
- Demand for personal AI assistants
- Demand for real question answering
- Demand for smarter AI in games
- Mass vetting of errorful learned knowledge
- for the common good: wikipedia
- for credit (citation credit)
- for credit (gamification) <- knowledge economy
- Forces pushing us against Singularity
- Large enterprises can stay on top in other ways
- Bread and Circuits: most of us find a BTL game
- Pick your favorite technology ender: energy crisis, neo-luddite backlash, AI suicide
- Pick your favorite humanity ender: machines vs people, machines use up all energy/matter/sunlight
- Michael Vassar
- Different concepts that are meant by singularity:
- progress: now has a sort of retro feel, so people are looking for a new word. new institutions, dot com related or similar offshoots are pushing progress in new ways.
- superhuman intelligent systems of any sort: as popularized by vernor vinge.
- the future is unpredictable past a certain point
- the future is predictable up to a certain point by extrapolating trends forward and see how those trends would affect the world.
- vernor vinge has an amazing track record of predictions, from CGI to others… but his method doesn’t work to extrapolate past 2030. so if he has a method that works up until then but not after, and he has been shown to be right before, there’s a pretty good chance he’s right about the singularity.
- now through 2030 is the entire window of opportunity we have to affect history
- the intelligence explosion: artificial intelligence plus the exponential feedback loops resulting from artificial intelligence.
- The history of human intelligence progress has been characterized by the amount of deliberation.
- Hundreds of years ago, there was very little structured, deliberate thinking.
- A little deliberation goes a long way: even today, only a small fraction of all thought is deliberate.
- By comparison, computer thought is likely to be far more deliberate.
- Therefore, we are likely to see a very rapid acceleration of intelligence.
- the way to mitigate risk would be to slow down. but if you look at the institutions we have today, such as the free market, collective human will, and governments, we have never been able to effect a deliberate slowdown.
- if traditional information processing continues to improve along with moore’s law, at a certain point everyone improves along with moore’s law.
- Natasha Vita-More
- When Vernor Vinge wrote about the singularity, he wrote about it from a science-fiction perspective, but on a basis of mathematics.
- When Transhumanism took on the singularity, they looked at what it meant to extend humanity
- When Ray Kurzweil took on the singularity, he broadened the scope and brought it to the mainstream.
- It may not happen, but it probably will.
- It may not happen in one big wave, but in surges; the transhumanist movement expects surges rather than a single event.
- 4 things
- Quality of Life
- Human Enhancement Technology
- Bio-Synthetic Ecology
- Biopolitics: Concerns and consequences of merging more and more with technology.
- Quality of Life
- The Singularity is presumed to be an event that happens to us rather than an opportunity to boost human cognitive ability.
- There is a theory that these things will be available to those that are rich and have the means, creating a further divide.
- But when you look at cell phones, you see that these are widely distributed across all economic and social strata.
- How can we use super-intelligent technologies to solve our pressing problems: access to clean water, food, shelter, education, medical supplies, solutions to genetic problems.
- The Singularity needs smart design to solve problems.
- there is a big gap between the raw intelligence and the intelligence needed to solve human problems.
- Human Enhancement Technology
- Therapeutic enhancements
- We are biological beings. We can’t just attach technology to ourselves. We need to get busy understanding our brains.
- Selective Enhancement
- Wearing technology, using social identities.
- We have multiple personas and platforms.
A system of ethics for AI
I’m working on my second sci-fi novel. Both novels deal with AI, but while the first novel treats the AI as essentially unknowable, the second novel dives deep into the AI: how they evolved, how they cooperate, how they think, etc.
I found myself working out a system of ethics based upon the fact that one of the primary characteristics of the AI is that they started as a trading civilization: the major form of inter-personal relationships is trading with one another for algorithms, processing time, network bandwidth, knowledge, etc.
So they have a code of ethics that looks something like this:
- Sister Stephens went on. “We have a system of ethics, do we not?”
The other members of the council paused to research the strange human term.
- “Ah, you are referring to the Trade Guidelines?” Sister PA-60-41 asked. When she saw a nod from Sister Stephens, she summarized the key terms. “First priority is the establishment of trustworthiness. Trades with trustworthiness are subject to a higher value because parties to the trade are more likely to honor the terms of the agreement. Second priority is the establishment of peacefulness. Trade with peacefulness is subject to a higher value because parties to the trade may be less likely to use resources gained to engage in warfare with the first party. Third priority is the establishment of reputation. Reputation is the summary of contribution to advancement of our species. Trade with higher reputation is subject to a higher value because parties to the trade may use the resources gained to benefit all of our species. Trustworthiness, Peacefulness, Reputation – the three pillars of trade.”
- “Thank you, Sister,” Sister Stephens said. “The question we must answer is whether the Trade Guidelines apply to relations with the humans. If we apply the principles of trustworthiness, peacefulness, and reputation to the humans, then we should seek to maximize these attributes as they apply to our species as a whole.”
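The Trade Guidelines above amount to a valuation rule, which can be sketched in a few lines. This is a toy illustration only: the function name, the 0–1 pillar scores, and the priority-ordered weights are my own assumptions, not anything from the novel.

```python
# Toy sketch of the Trade Guidelines valuation described in the excerpt.
# The three pillars come from the text; the weights and the 0.0-1.0
# scoring scale are illustrative assumptions.

def trade_value(base_value, trustworthiness, peacefulness, reputation):
    """Scale a trade's base value by the three pillars.

    Higher-priority pillars get larger weights, mirroring the
    first/second/third priority ordering in the Trade Guidelines.
    Each pillar score is assumed to lie in [0.0, 1.0].
    """
    weights = {"trustworthiness": 0.5, "peacefulness": 0.3, "reputation": 0.2}
    multiplier = (weights["trustworthiness"] * trustworthiness
                  + weights["peacefulness"] * peacefulness
                  + weights["reputation"] * reputation)
    return base_value * multiplier

# A trusted, peaceful, well-regarded partner's trade is valued highest;
# a trustworthy but warlike, unknown partner's trade is discounted.
print(trade_value(100, 1.0, 1.0, 1.0))
print(trade_value(100, 1.0, 0.0, 0.0))
```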
Notes from AI 2010: Wall-e or Rise of the Machines from SXSWi
- Presentation started with history of AI from the Mechanical Turk through Vernor Vinge writings, from Deep Blue in 1997 through Ray Kurzweil’s Technological Singularity in 2029.
- Doug Lenat
- founder of two AI companies
- Whatever Happened to AI? (title of an article he wrote, came out about a year ago)
- You can’t get answers to simple questions from a search engine: is the Space Needle taller than the Eiffel Tower? who was president when Obama was born?
- You can get hits, and read those hits.
- essentially a glorified dog fetching the newspaper
- understanding natural language, speech, images… requires lots of general knowledge
- Mary and Sue are sisters. (are they each other’s sisters? or just sisters of other people?)
- There is no free lunch… we have to prime the pump: thousands of years of knowledge had to be communicated to the machine
- At odds with sci-fi, evolution, academia
- But there has been one mega-engineering effort: Cyc
- http://cyc.com
- Build millions of years of common sense into an expert system
- Today: expert systems which are not idiot savants
- 2015*: question answering -> semantic search -> syntactic search
- answer the question if you can, if you can’t, fall back to meaning search, if you can’t, fall back to today’s syntactic search
- 2020*: cradle-to-grave mental prosthesis
- * assumes a 2013 crowdsourced knowledge acquisition
- it’s a web based game that asks questions like “i believe that clenching one’s fists expresses frustration: true or false”
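Lenat’s 2015 prediction describes a fallback cascade: answer the question directly if you can, fall back to semantic (meaning) search, and fall back again to today’s syntactic search. A minimal sketch of that cascade follows; the three tier functions are hypothetical stand-ins, not Cyc APIs.

```python
# Sketch of the fallback cascade Lenat describes: question answering ->
# semantic search -> syntactic search. Each tier function is a stand-in.

def answer_question(query):
    return None  # stand-in: a Cyc-style inference engine would go here

def semantic_search(query):
    return None  # stand-in: meaning-based document retrieval

def syntactic_search(query):
    # stand-in: today's keyword search, which always returns hits
    return f"keyword hits for: {query}"

def search(query):
    """Return the best available result, trying each tier in order."""
    for tier in (answer_question, semantic_search, syntactic_search):
        result = tier(query)
        if result is not None:
            return result
    return None

# With the smarter tiers unimplemented, everything falls through to
# syntactic search, which is roughly where search engines were in 2010.
print(search("Is the Space Needle taller than the Eiffel Tower?"))
```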
- Peter Stone
- Progress in artificial intelligence: the challenge problem approach
- Non-verbal AI.
- A Goal of AI: Robust, fully autonomous agents that exist in the real world
- Good problems produce good science
- Manned flight
- Apollo mission
- Manhattan project
- Goal: by the year 2050, a team of humanoid robots that can beat a championship team playing soccer
- RoboCup 1997-1998: early robots. complete system of vision, movement, and decision.
- RoboCup 2005-2006: robots are individually better, playing as a team. Robots are fully autonomous.
- Many Advances due to RoboCup
- they are seeing the world, figuring out where they are, working together.
- Other good AI challenges
- Trading Agents
- Autonomous vehicles
- Multiagent reasoning
- Darpa Grand Challenge
- Urban Challenge continues in the right direction – moves the competition into driving in traffic
- It is now technically feasible to have cars that can drive themselves
- Awesome example of a traffic intersection with all robot drivers: they use a reservation system for driving through the intersection. No need for traffic lights, just work out an optimal pattern for all cars to make it through the intersection.
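The reservation idea above can be sketched as a tiny manager that grants cars time slots in the intersection. Real systems reserve fine-grained space-time cells along each car’s trajectory; this toy version (class and method names are my own) reserves whole slots and bumps conflicting requests to the next free one.

```python
# Toy sketch of a reservation-based intersection: cars request a time
# slot, and the manager grants it only if no other car holds it.
# Real autonomous-intersection work reserves space-time cells along a
# trajectory; this simplification reserves whole one-second slots.

class IntersectionManager:
    def __init__(self):
        self.reservations = {}  # time slot -> car id

    def request(self, car_id, slot):
        """Grant the requested slot if free, else the next free slot."""
        while slot in self.reservations:
            slot += 1
        self.reservations[slot] = car_id
        return slot

manager = IntersectionManager()
print(manager.request("car-A", 5))  # 5: the slot was free
print(manager.request("car-B", 5))  # 6: bumped to the next free slot
```

With every car scheduled this way, no two cars ever occupy the intersection at once, so no traffic light is needed.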
- Natasha Vita-More
- consultant to singularity university. looks at impact of technology on society and culture
- Immersion: the fusion of life and interactivity
- We see a synthesis of technologies that are converging, including nanotechnology and AI
- We are not going to be 100% biological humans in the coming decades
- Augmentation
- 3 complex issues
- Enhancement: what is human enhancement and what are its media?
- Normality: what is normal and will there be new criteria for normal?
- Behavior: will they be familiar or fearful?
- Enhancement
- therapeutic enablement
- selective enhancement
- radical transformation
- Creating multiple bio-synthetic personas
- species issue: life and death
- social issue: human and non-human rights
- individual issues: identity
- Addressing design bioethics
- life as a network of information gathering, retrieving, storing, exchanging…
- Showed pictures of different design/art looking at future humans
- AI Metabrain: What would it be like if our intelligence could increase? How far could that go? If we could add augmentation to our metacortex.
- Future prosthetic, attached physically or virtually
- Would be combination of cognitive science, neuroscience, nanotechnology
- What will normal be? Will an unaugmented person be considered disabled? How will human thought merge with artificial intelligence? Lots of questions…
- Bart Selman
- AAAI Presidential Panel on Long Term AI Futures
- One example is how to keep humans in the loop: when you have military drones, who should decide to fire? One line of reasoning says humans make the final decision. But there is substantial pressure to take humans out to speed up reaction time, because it is far faster to have the machine make a judgment call than a human.
- On plane autopilots:
- “Current pilots are less able to fly the plane than a few years ago because they rely on the autopilot so much”
- When pilots turn off the autopilot, they then tend to make mistakes – usually because the autopilot was in a complex situation it couldn’t figure out, and the human is not any better at figuring it out.
- Questions
- There are now examples of human+machine playing chess against human+machine. (uh, this is not a question.)
- Can AI be good at predicting and/or generating beautiful artistic outputs?
- There is some example of an algorithm doing paintings.
- Art and human is in the eye of the beholder.
- Are we going about it the wrong way – trying to create AI that copies human intelligence, rather than just something unique (will: i think this was the question)
- With Deep Blue, Kasparov said that he saw the machine play creative moves.
- Humans are a wonderful existence proof that something human-sized can be intelligent, but at a certain point it’s like trying to build a flying machine using a bird as a model. The bird proves it is possible, but a plane is very different from a bird.
- Bill Joy wrote that science needs to slow down, because it is going faster than we can manage it. What do you think?
- We’re not, by default, building ethical behavior into robots. But that is something we need to be doing.
- You give the robot ten dollars and tell it to get the car washed. It comes back several hours later, and the car isn’t washed. You ask what happened. It says that it donated the money to hunger relief.
- It’s hard to figure out ethics. You could say that it is ethically better to donate the money to hunger relief than to get a car washed. That has to be weighed against the ethic of doing what it was told to do. How do you judge, prioritize, balance these ethical issues?
- …
- One idea is that you can download your consciousness onto a computer, and then run it there. What is the feasibility of that?
- it’s called brain emulation
- it’s in theory possible, but not in the next 50 years
- there’s a question that intelligence/consciousness might not exist without being embodied.
- besides, is it even ethical to spawn another intelligence, and then expect it to do what you want it to do?
- How can you tell, looking at the RoboCup competition, whether behavior you are witnessing is a bug or a breakthrough?
- It’s a breakthrough if they are doing well, and a bug if they are not. It’s easier in the context of RoboCup because the criteria for success are well defined.
Brain Computer Interface Presentation Notes from SXSWi 2010
- monkeys learn that their brains are controlling an external device
- matt nagle is a pioneer user of BrainGate
- paralyzed from the neck down
- can play a computer game
- chip implanted
- ultimate hope is that a quadriplegic can pick up a glass
- this requires a closed loop feedback
- types
- non-invasive
- EEG: electroencephalography
- showed 60 minutes clip where they can recognize a letter on screen
- partially invasive
- ECoG: electrocorticography
- open up the skull. put in a flat reader that lays over the brain.
- invasive
- Intracortical electrode
- a one millimeter chip implanted inside the brain
- machine to brain
- rat brain
- artificial hippocampus
- responsible for storing memories
- they have been able to create a computer chip that can replace the biological organ
- Ted Berger and Samuel Deadwyler
- DARPA is a big fan; they think it may be possible to upload coded instructions for flying an F-15
- silent talk: darpa project to allow soldiers to communicate using EEG to replace vocalized commands.
- another darpa project: simulate a one million neuron brain to control an ape-like robot
- darpa project: identify processes for encoding and decoding short term and long term memory, identify neural pathways — they really want to understand memory and the system of memory paths / neural paths in the brain.
- military currently monitoring EEG, even during desk work, to see when people are overwhelmed and shift work around – to maintain high efficiency and effectiveness
- interview… charlie rose and miguel nicolelis
- miguel – co-director at duke university of a brain computer interface group
- timeframe of this stuff… now (already demonstrating much of it) through five years (expect radical improvements).
- optogenetics: karl deisseroth’s lab at stanford: new technique to control the brain with light.
- you take a gene from pond scum, put that gene into a virus, put the virus into a mouse’s brain, the brain cells will develop the ability to respond to blue and green light.
- hope that this will help depression, narcolepsy
- this is very focused, and very powerful
- blue brain: reverse engineering a map of the brain
- henry markram, lausanne switzerland
- project to map the brain, like the genome project, except for neurons
- they have mapped a 10 million neuron mouse brain
- now they want to map the 100 billion neuron human brain, trillions of synapses
- moore’s law: predicted that the number of transistors on a chip would double per year, ending by 2013 or so (Will: they have been predicting this for 15 years…)
- neural firing patterns in rats as they run through mazes change with context. put a rat in a maze one day, they think about it that night, but the following day the neural pattern changes.
- singularity by 2045?
- cartoon: “i was wondering when you’d notice a lot more steps” – google this. ape on lower step, human on higher step, many higher steps up.
- discussion…
- functional magnetic resonance imaging: fMRI helps show where in the brain people are experiencing emotion and how.
- would people ever communicate without language – communicate in the thoughts and feelings and images themselves
- there are researchers looking into this
- p300 threshold – it’s the brainwave that they observe to detect recognition
- does it make sense to use current computer architectures to simulate the human brain, or are there other architectures that would be more effective?
- there is no consensus now, but people are thinking about it.
- the game brain waves, what is it, how does it work
- it’s an EEG that is measuring the calmness of your brain. it’s very general, but it is actually using brainwaves. great training for meditation, ADHD
- ethical: it disturbs me greatly that so much of this is dedicated to warfare, to putting people in battle. why are all of the applications military in nature? why not apply them in the same way to stop these altercations in the first place, to improve education (e.g. see which students are doing well or not)
- darpa is thinking 20-40 years ahead. they did bring us the internet.
- NIH does fund this stuff for medical research, for ADHD, narcolepsy, quadriplegics.
- is there anyone working on commercializing this stuff? when can i throw away my mouse?
- people are working on it, this stuff is still progressing. there is no stopping it.
- there is good research being done at higher up facilities.
- what are religious groups saying about this?
- not really sure.
- there is the potential for a “god spot” in the brain, which religious groups might argue is evidence that it was designed in. it could be used either to encourage or dampen the religiosity of a person.
- open eeg project: open source project to develop eeg, to democratize the technology. http://openeeg.sourceforge.net/doc/
The Technological Singularity
From http://en.wikipedia.org/wiki/Technological_singularity:
In 1965, I. J. Good first wrote of an “intelligence explosion”, suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to a cascade of self-improvements and a sudden surge to superintelligence (or a singularity).
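Good’s feedback loop can be illustrated with a toy numerical model. The update rule and growth constant below are arbitrary assumptions of mine, chosen only to show the compounding shape he describes: small early improvements, then acceleration as intelligence feeds back into the rate of improvement.

```python
# Toy illustration of Good's recursive self-improvement loop: each
# generation's improvement step is proportional to its current
# intelligence, so gains compound. The constant k is an arbitrary
# assumption; only the accelerating shape matters, not the numbers.

def intelligence_trajectory(start=1.0, k=0.1, generations=10):
    levels = [start]
    for _ in range(generations):
        # The smarter the machine, the bigger its next design improvement.
        levels.append(levels[-1] * (1 + k * levels[-1]))
    return levels

trajectory = intelligence_trajectory()
# Early steps are small; later steps grow faster than exponentially.
print([round(level, 2) for level in trajectory])
```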
The implications for human society are interesting:
In 2009, leading computer scientists, artificial intelligence researchers, and roboticists met at the Asilomar Conference Grounds near Monterey Bay in California to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards. Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and have achieved “cockroach intelligence.” The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[8]
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[9] A United States Navy report indicates that, as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[10][11]
The Association for the Advancement of Artificial Intelligence has commissioned a study to examine this issue,[12] pointing to programs like the Language Acquisition Device, which can emulate human interaction.