When I was a kid, the computer was a "great electronic brain." Much was made of the computer being humanity's first "thinking machine." When Seymour Papert first started teaching Logo in the 1960s, students were told that they were "teaching" the computer new processes, new ideas, and new ways of doing things. That was inspiring rhetoric.
Why did we shift away from that rhetoric? I imagine it was because it was dangerous. Claiming that the computer was "thinking" implied that it was "thinking like a human," which led to trusting whatever came out of the computer: whatever the computer said must be as true as what a human might say. Claiming "thinking" feels audacious, even presumptuous, today. Yet moving away from the "electronic brain" rhetoric hasn't actually done much to curb blind faith in computing. Consider how many people believe something because they read it "on the Internet," or fall for email scams.
The "electronic brain" rhetoric is inspiring and fundamentally true. A computer is the first device that humans have made that can think, in the sense of making decisions, conducting processes, and creating new products. Yes, electromechanical systems make decisions, but not nearly at the scale of what a modern electronic computer can, and scale really matters. Our neurons aren't that much different from lesser animals, but the number and complexity result in conscious beings. We do not yet know the limits of computational processing with regards to thinking. We have a suspicion that, as the complexity increases, the closer we get to real thinking.
Can human-scale intelligence be replicated in a computer? Answering that question is a Grand Challenge not just for computer science, but for all humanity. It's a challenge that spans genders and ethnicities. The smaller form of the question is important for every discipline: How much of the thinking and processing done in my field can be encoded in an algorithm and handed off to a computer? How will such handing off of thinking change my field? What we see in science and engineering suggests that the answer to that second question is "Profoundly, changing the very nature of what you can do and how you operate."
I was reminded of the "electronic brain" rhetoric when I read about Gerald Sussman's comments on why MIT switched from Scheme to Python in their introductory computing course. One of his comments, on how programming has fundamentally changed, really impressed me: "The fundamental difference is that programming today is all about doing science on the parts you have to work with." Programming is no longer just about reflecting on a problem and constructing a solution, he suggests. Programming is about assembling the parts that we have to work with and "doing science" on the sum of the parts. A key piece of this new process is that we do not know how the parts work, only what they do. The parts have their own "thinking" to them, making decisions and dealing with data that is invisible to us, even as programmers. This new view of programming is about shifting up levels of abstraction. We're getting closer to assembling units of thinking. A science of that is bold, noble, presumptuous, and audacious.
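To make Sussman's point a little more concrete, here is a minimal sketch (in Python, the language MIT switched to) of what "doing science on the parts" can look like: treat a routine as a black box, form a hypothesis about its behavior, and gather data by experiment rather than by reading its implementation. The probe_running_time helper and the choice of the built-in sort as the opaque "part" are purely illustrative assumptions of mine, not anything Sussman prescribed.

```python
# A sketch of "doing science on the parts you have to work with":
# characterize a black-box component empirically instead of inspecting
# its internals. The built-in sort stands in for any opaque part.

import random
import time

def probe_running_time(sizes, trials=3):
    """Measure how the black-box component scales with input size."""
    results = {}
    for n in sizes:
        best = float("inf")
        for _ in range(trials):
            data = [random.random() for _ in range(n)]
            start = time.perf_counter()
            sorted(data)                     # the part we experiment on
            best = min(best, time.perf_counter() - start)
        results[n] = best
    return results

if __name__ == "__main__":
    # Hypothesis: roughly n log n growth. Gather data and compare.
    for n, t in probe_running_time([1_000, 10_000, 100_000]).items():
        print(f"n={n:>7}  time={t:.4f}s")
```

The point isn't the timing code itself; it's the stance. We never look inside the part. We poke it, observe, and build a working theory of what it does.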
In talking to people about CRA-E, I've realized how much Grand Challenges and Noble (even Nobel) Pursuits have to do with recruitment. People go into a field to make money, yes. But the people who are most inspired, and whom you most want in your discipline, pursue it because they see the field as important. We need to get back to that in computing. Yes, recruitment into computing is about jobs and the fear of off-shoring. It's also about solving important problems and increasing our understanding of what it means to be human. That's something to inspire great human brains.