The Alignment Problem
“The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions...
- Dr. Stuart Russell
A growing number of experts believe that ensuring AI systems have the objectives we want them to have will be a challenging technical problem. This is a concern of existential proportions because:
1. There are strong economic incentives to improve AI capabilities quickly, which could cause safety to be neglected.
2. Any highly capable system will be motivated to preserve itself and acquire resources in order to meet its objectives, which could result in our extinction if it isn't carefully aligned.