The Internet of Things Is the Next Digital Evolution—What Will It Mean?
As digital technology infuses everyday life, it will change human behavior—raising new challenges about equality and fairness.
In a single generation, this has become the new normal: Nearly all adult Americans use the internet, and three-fourths have broadband access in their homes. And the internet travels with them in their pockets—95 percent have a cellphone and 81 percent have a smartphone. This ability to constantly connect has changed how people interact, especially in their social networks—more than two-thirds of adults are on Facebook, Twitter, or another social media platform.
Digital innovations have made it easier for people to find more information than ever before, and made it easier to create and share material with others. From smartphone-delivered directions to voice-driven queries to on-demand news, people’s lives have been transformed by these technologies. Yet today’s inventions and innovations mark only the start, and tomorrow’s digital disruption, which is already underway, will probably dwarf them in impact.
The next digital evolution is the rise of the internet of things, often shortened to the “IoT.” This refers to the growing phenomenon of building connectivity into vehicles, wearable devices, appliances and other household items such as thermostats, as well as goods moving through business supply chains. It also covers the rapid spread of data-emitting or tracking sensors in the physical environment that give readouts on everything from crop conditions to pollution levels to where there are open parking spaces to babies’ breathing rates in their cribs.
The Pew Research Center and Elon University in North Carolina invited hundreds of technology experts in 2014 to predict the future of the internet by the year 2025, and the overriding theme of their answers addressed this reality. They predicted that the growth of the internet of things will soon make the internet like electricity—less visible, yet more deeply embedded in people’s lives, for good and for ill.
The internet of things will have literally life-changing impact on innovation and the application of knowledge in the coming years. Here are four major developments to anticipate.
The emergence of the ‘datacosm’
The spread of the internet of things will accelerate the digitization of data, spawning creation of record amounts of information. Data and connectivity will be ubiquitous in an environment sometimes called the “datacosm”— a term used to describe the importance of data, analytics, and algorithms in technology’s evolution. As previous information revolutions have taught us, once people—and things—get more connected, their very nature changes.
“When we are connected, power shifts. It changes who we are, what we might expect, how we might be manipulated, attacked, or enriched,” writes Joshua Cooper Ramo in his new book, The Seventh Sense. Networks of constant connection “destroy the nature of even the most solid-looking objects.” Connected things and connected people become more useful, more powerful, but also more hair-trigger and more destructive because their power is multiplied by a networking effect. The more connections they have, the more capacity they have for good and harmful purposes.
On the human level, the datacosm arising from the internet of things could function like a “fifth limb,” an extra brain lobe, and another layer of “skin” because it will be enveloping and omnipresent. People will have unparalleled self-awareness via their “lifestreams”: their genome, their current physical condition, their memories, and other trackable aspects of their well-being. Data ubiquity will allow reality to be augmented in helpful—and creepy—ways.
For instance, people will be able to look at others and, thanks to facial recognition and digital profiling, simultaneously browse their digital dossiers through an app that could display the data on “smart” contact lenses or a nearby wall surface. They will gaze at artifacts such as paintings or movies and be able to download material about how the art was created and the life story of the creator. They will take in landscapes and cityscapes and be able to learn quickly what transpired in these places long ago or what kinds of environmental problems threaten them. They will size up buildings and have an overlay of insight about what takes place inside them.
Part of the reason that data will be infused into so much is that the interfaces of connectivity and the ability to summon data will be radically enhanced. Human voices, haptic interfaces that can be manipulated by finger movements (think of the movie “Minority Report”), real-time language translators, data dashboards that give readouts on a user’s personally designed webpage, even, eventually, brain-initiated commands will make it possible for people to bring data into whatever surroundings they find themselves. Not only will this allow people to apply knowledge of all kinds to their immediate circumstances, but it will also advance analysts’ understanding of entire populations as their “data exhaust” is captured by their GPS-enabled devices and web clickstream activity.
Many experts in the Pew Research Center’s canvassings expect major benefits to emerge from this growth and spread of data, starting with the fact that knowledge will be ever-easier to apply to real-time decisions such as which custom-designed medicine a person should receive, or which commuting route to take to work. Beyond that, this data overlay and growing analytic power will allow swifter interventions when public health problems arise, weather emergencies threaten, environmental stressors mount, educational programs are introduced, and products are brought to the market.
This new reality will also cause major hardships. When information is superabundant, what is the best way to find the best knowledge and apply it to decisions? When so much personal data is captured, how can people retain even a sliver of privacy? What mechanisms can be created to overcome polarizing propaganda that can weaken societies? What are the right ways to avoid “fake news,” disinformation, and distracting sideshows in a world of info-glut?
Struggles over people’s “right relationship” to information will be one of the persistent realities of the 21st century.
Growing reliance on algorithms
The explosion of data has given prominence to algorithms as tools for finding meaning in data and using it to shape decisions, predict humans’ behavior, and anticipate their needs. Analysts such as Aneesh Aneesh of the University of Wisconsin, Milwaukee, foresee algorithms taking over public and private activities in a new era of “algocratic governance” that supplants the way current “bureaucratic hierarchies” make government decisions. Others, like Harvard University’s Shoshana Zuboff, describe the emergence of “surveillance capitalism” that gains profits from monetizing data captured through surveillance and organizes economic behavior in an “information civilization.”
The experts’ views compiled by the Pew Research Center and Elon University offer several broad predictions about the algorithmic age. They predicted that algorithms will continue to spread everywhere and agreed that computer code can lead to greater human insight into the world, less waste, and major safety advantages. A share of respondents said data-driven approaches to problem-solving will often improve on human approaches because computer code can be refined at much greater speed. Many predicted that algorithms will be effective tools to make up for human shortcomings.
But respondents also expressed concerns about algorithms.
They worried that humanity and human judgment are lost when data and predictive modeling become paramount. These experts argued that algorithms are primarily created in pursuit of profits and efficiencies and that this can be a threat; that algorithms can manipulate people and outcomes; that a somewhat flawed yet inescapable “logic-driven society” could emerge; that code will supplant humans in decision-making and that, in the process, humans will lose skills and specialized, local intelligence in a world where decisions are based on more homogenized algorithms; and that respect for individuals could diminish.
Just as grave a concern is that biases exist in algorithmically organized systems that could worsen social divisions. Many in the expert sampling said that algorithms reflect the biases of programmers and that the data sets they use are often limited, deficient, or incorrect. This can deepen societal divides. Those who are disadvantaged could be even more so in an algorithm-organized future, especially if algorithms are shaped by corporate data collectors. That could limit people’s exposure to a wider range of ideas and eliminate serendipitous encounters with information.
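The mechanism these experts describe can be illustrated with a toy sketch. All of the data, names, and the decision rule below are invented for illustration: a rule “learned” from a historically skewed record simply replicates that skew, holding equally qualified applicants from one group to a higher bar.

```python
# Toy illustration (hypothetical data) of how a biased training set
# propagates into an algorithm's decisions. The historical record here
# under-approves group "B", so a rule derived purely from that record
# rejects equally qualified applicants from group "B" more often.

# (score, group, approved_historically) -- invented records
history = [
    (0.9, "A", True), (0.8, "A", True), (0.7, "A", True),
    (0.9, "B", True), (0.8, "B", False), (0.7, "B", False),
]

def learned_threshold(records, group):
    """Lowest score that was ever approved for this group in the data."""
    approved = [score for score, g, ok in records if g == group and ok]
    return min(approved)

# The "algorithm" simply reuses each group's historical bar.
print(learned_threshold(history, "A"))  # 0.7
print(learned_threshold(history, "B"))  # 0.9 -- same scores, higher bar
```

Nothing in the code mentions the groups’ qualifications differing; the disparity comes entirely from the limited, skewed data it was handed, which is the experts’ point.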
A new relationship with machines and complementary intelligence
As data and algorithms permeate daily life, people will have to renegotiate the way they use and think about machines, which are now learning at an accelerating pace. Many experts see a new equilibrium emerging as people take advantage of artificial intelligence that can be consulted in an instant, context-aware gadgets that “read” a situation and assemble relevant information, robotic devices that serve their needs, smart assistants or bots (possibly in the form of holograms) that help people navigate the world or help represent them to others, and device-based enhancements to their bodies and brains. “Basically, it is the Metaverse from Snow Crash,” predicts futurist Stowe Boyd, referring to Neal Stephenson’s sci-fi vision of a world where people and their avatars seamlessly interact with other people, their avatars, and independent artificial intelligence agents developed by third parties, including corporations.
Even if it does not fully reach that state, there will be a great re-sorting of the roles people play in the world and the functions machines assume. Now that IBM’s Deep Blue has beaten the world’s best chess player, its Watson system has bested the best “Jeopardy!” players, and Google’s AI system has vanquished the world’s Go champion, there is strong incentive to bring these masterful machines into hospital operating rooms and have them help assess radiology readouts; to put them to work in stock trading and insurance risk analysis; to use them in self-driving cars and drones; and to let them aid people’s capacity to move around smart homes and smart cities.
The creation and application of all this knowledge has vast implications for basic human activity—starting with cognition. The very act of thinking is already undergoing significant change as people learn how to tap into all this information and cope with processing it. That impact will expand in the future. The quality of “being” will change as people are able to be “with” each other via lifelike telepresence. People’s capacities are likely to expand as digital devices, prostheses, and brain-enhancing chips become available. Human behavior itself could change as an overlay of data gives people enhanced situational and self-awareness. The way people allocate their time and attention will be restructured as options proliferate. For instance, the manner in which they spend their leisure time is likely to be radically recast as people are able to amuse themselves in compelling new virtual worlds and enrich themselves with vivid new learning experiences.
Greater innovation in social norms, collective action, credentials, and laws
With so much upheaval ahead, people, groups, and organizations will be forced to adjust. At the level of social norms, it is easy to envision social environments in which people must constantly negotiate what information can be shared, what kinds of interruptions are tolerable, what balance of fact-checking and gossip is acceptable, and what personal multitasking is harmful. In other words, much of what constitutes civil behavior will be up for grabs.
At a more formal level, some primary aspects of collective action and power are already altered as social networks become a societal force, both as pathways of knowledge sharing and as mechanisms for mobilizing others to do something. There are new ways for people to collaborate and solve problems. Moreover, there are a growing number of group structures that address problems ranging from micro-niche matters (my neighbors and I respond to a local issue) to macro-global wicked problems (multinational alliances tackle climate change and pandemics).
Shifts in labor markets in the knowledge economy, which are constantly pressing workers to acquire new skills, will probably refashion some of the features of higher education and prompt change in work-related training efforts. Fully 87 percent of current U.S. workers believe it will be important or essential for them to pursue new skills during their work lives. Not many believe the existing certification and licensing systems are up to that job. A notable number of experts in another Pew Research Center-Elon University canvassing are convinced that the training system will begin breaking into several parts: one that specializes in basic work preparation education to coach students in lifelong learning strategies; another that upgrades the capacity of workers inside their existing fields; and yet another that is more designed to handle the elaborate work of schooling those whose skills are obsolete.
At the most structured level, new laws and court battles are inevitable. They are likely to address questions such as: Who owns what information and can use it and profit from it? When something goes wrong with an information-processing system (say, a self-driving car propels itself off a bridge), who is responsible? Where is the right place to draw the line between data capture—that is, surveillance—and privacy? Can a certain level of privacy be maintained as an equal right for all, or is it no longer possible? What kinds of personal information are legitimate to consider in assessing someone’s employment, creditworthiness, or insurance status? Where should libel laws apply in an age when everyone can be a “publisher” or “broadcaster” via social media and when people’s reputations can rise and fall depending on the tone of a tweet? Can information transparency regimes be applied to those who amass data and create profiles from it? Who’s overseeing the algorithms that will be making so many decisions about what happens in society? (Several experts in the Pew Research Center canvassing called for new governmental regulations relating to the development and deployment of algorithms.) Which entities should define what is appropriate out-of-bounds speech for a community, a culture, a nation?
The information revolution in the digital age is magnitudes faster than those of previous ages. Much greater movement is occurring in technology innovation than in social innovation—and this potentially dangerous gap seems to be expanding. As we grapple with this, it would be useful to keep in mind the Enlightenment sensibility of Thomas Jefferson. He wrote in 1816: “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinions change with the change of circumstances, institutions must advance also, and keep pace with the times.”
We are likely to have to depend on our machines to help us figure out how to avoid being crushed by this avalanche.