If you’re taking Intro to Physical Computing and you’re not sure where to go, start with the syllabus menu above and follow the links associated with each week.
All computing is physical. We work with computational systems by acting on devices with our bodies. Building and using those devices consumes raw materials and energy as well. In short, the virtual always has physical consequences.
This course is about how to design those devices for our bodies. Physical Computing is an approach to learning how humans communicate through computers that starts by considering how humans express themselves physically. In this course, we take the human body as a given, and attempt to design computing applications within the limits of its expression.
To realize this goal, you’ll learn how a computer converts the changes in energy given off by our bodies (as sound, light, motion, and so on) into changing electronic signals that it can read and interpret. You’ll learn about the sensors that do this, and about simple computers called microcontrollers that read sensors and convert their output into data. Finally, you’ll learn how microcontrollers communicate with other computers.
Computer interface design instruction often takes the computer hardware as a given — namely, that there is a keyboard, a screen, speakers, and a mouse or trackpad or touchscreen — and concentrates on teaching the software necessary to design within those boundaries. In physical computing, we take the human body and its capabilities as the starting point, and attempt to design interfaces, both software and hardware, that can sense and respond to what humans can physically do.
Starting with a person’s capabilities requires an understanding of how a computer can sense physical action. When we act, we cause changes in various forms of energy. Speech generates the air pressure waves that are sound. Gestures change the flow of light and heat in a space. Electronic sensors can convert these energy changes into changing electronic signals that can be read and interpreted by computers. In physical computing, we learn how to connect sensors to the simplest of computers, called microcontrollers, in order to read these changes and interpret them as actions. Finally, we learn how microcontrollers communicate with other computers in order to connect physical action with multimedia displays.
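To make that chain concrete, here is a minimal sketch of the kind of microcontroller program you’ll write early on. It assumes an Arduino-style board with some analog sensor (a potentiometer, photocell, or similar) wired to analog pin A0; the board, pin, and serial speed are illustrative choices, not a required setup.

```cpp
// Read an analog sensor and send its value to an attached computer over serial.
// Assumes an Arduino-style board with a sensor wired to analog pin A0 (example wiring).

const int sensorPin = A0;   // any analog input pin will do

void setup() {
  Serial.begin(9600);       // open the serial connection to the attached computer
}

void loop() {
  int reading = analogRead(sensorPin);  // convert the sensor's voltage to a number (0-1023 on a classic Arduino)
  Serial.println(reading);              // send the number as a line of text over serial
  delay(10);                            // brief pause so the receiving program can keep up
}
```

On the other end, any program that can open a serial port (for example, a Processing or p5.js sketch, or a Python script) can read those numbers and use them to drive graphics, sound, or whatever else the application calls for.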
Physical computing takes a hands-on approach, which means that you spend a lot of time building circuits, soldering, writing programs, building structures to hold sensors and controls, and figuring out how best to make all of these things relate to a person’s expression.
Cool. So we’ll build all kinds of robots?
Not quite. While the hardware skills used in physical computing are similar to those used in robotics, the concepts are a bit different. When you build robots, you’re usually focused on making devices that are autonomous, capable of navigating through the world on their own. Physical computing systems, in contrast, focus on interaction with a human. Rather than automation, we focus on using digital technologies to extend human capabilities, creating systems that are driven by a person’s intentions, decisions, and actions. Where a robotics course might focus on the mechanics, drive, and sensing systems of a robot, a physical computing course might concentrate more on the interface, both hardware and software, necessary for a human to direct that robot.
What will I learn in this class, and what should I know in advance?
There are three broad areas you’ll learn about in this course:
- you’ll get an introduction to microcontroller electronics, in order to understand how sensors and actuators work and how they are controlled by computers;
- you’ll learn the rudiments of programming microcontrollers, and how to interface them to other computers via serial communication;
- you’ll learn how to think about physical interaction design, starting with observation of what the user physically does and then planning the best ways to sense and respond to that action (a brief sense-and-respond sketch follows this list).
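As one small illustration of that sense-and-respond loop, the sketch below reads an analog sensor and turns on an LED when the reading crosses a threshold. The pin numbers and threshold are placeholder values for an assumed Arduino-style circuit, not part of any assignment.

```cpp
// Sense and respond: light an LED when a sensor reading crosses a threshold.
// Pins and threshold are example values; adjust them to match your own circuit.

const int sensorPin = A0;     // analog sensor (e.g. a force sensor or photocell)
const int ledPin = 2;         // LED with a current-limiting resistor on digital pin 2
const int threshold = 512;    // roughly the midpoint of the 0-1023 analog range

void setup() {
  pinMode(ledPin, OUTPUT);    // configure the LED pin as an output
}

void loop() {
  int reading = analogRead(sensorPin);
  if (reading > threshold) {
    digitalWrite(ledPin, HIGH);   // respond: turn the LED on
  } else {
    digitalWrite(ledPin, LOW);    // otherwise keep it off
  }
}
```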
This course assumes no prior knowledge of any of these subjects, but it does require a lot of out-of-class time and effort. Most of the real work happens outside of class, both in the shop building and programming, and in the world observing people to understand how their actions reflect their intentions.
Many people take an introductory programming course in parallel with a physical computing course. If you’ve done some web-based user interface programming in JavaScript, you’re in good shape. Likewise, if you’ve learned Java, Processing, C, C++, Python, Ruby, or most any other programming language, you’ll have enough of the basics necessary to get going.
You don’t need any prior background in electronics for this course. You’ll learn just enough in this class to connect a variety of sensors and actuators to a microcontroller so that you can realize your ideas.
This isn’t primarily an electronics course, a programming course, or a design course. Just as there are complementary courses that go into more depth on programming, there are complementary classes that go into more depth on electronics. This course is a broad overview of the techniques used in physical interaction design.