RESEARCH REPORT

Technology Vision 2024

Human by design

15-minute read

January 9, 2024

In brief

  • Technology is becoming human by design, and enterprises that prepare now will win in the future.
  • How people access and interact with information is radically changing as human-like, AI-powered chatbots synthesize vast amounts of information and provide answers and advice.
  • AI is starting to reason like us, and will soon form entire ecosystems of AI agents that work with one another and act on behalf of people and organizations alike.
  • A new spatial computing medium is emerging, letting the digital world reflect what it means to be human and in a physical space.
  • The challenge of tech not understanding us and our intent is disappearing: machines are getting much better at interacting with humans on their level.

Human by design: How AI unleashes the next level of human potential

It’s time to make technology human by design.

This is a moment for reinvention. In the coming years, businesses will have an increasingly powerful array of technologies at their disposal that will open new pathways to unleash greater human potential, productivity, and creativity. Early adopters and leading businesses have kick-started a race toward a new era of value and capability. And their strategies are underpinned by one common thread – the technology is becoming more human.

It sounds counterintuitive: after all, wasn’t technology built by, and for, humans? Creating tools that expand our physical and cognitive abilities is so unique to humanity that some argue it defines us as a species.

Despite this, the tools we build are often distinctly unhuman, filling gaps by doing and being what we couldn’t, and in the process radically transforming our lives. Automobiles expanded our freedom of mobility. Cranes let us build skyscrapers and bridges. Machines helped us create, distribute, and listen to music.

Technology’s unhuman nature can also be its drawback. Extended use of hand tools can lead to arthritis. Years of looking at screens can accelerate vision problems. We have amazing navigational tools, but they still distract us from driving. Granted, there have been efforts to create tools that are more ergonomic or easier to use. But even so, time and again we make decisions about our lives based on what is best for a machine rather than what best serves human potential.

Now, for the first time in history, there’s strong evidence that we are reversing course—not by moving away from technology, but by embracing a generation of technology that is more human: more intuitive in both design and nature, demonstrating more human-like intelligence, and easier to integrate across every aspect of our lives.

Generative AI has the potential to impact much more than just the task at hand. It’s also starting to profoundly reshape organizations and markets.

Consider the impact generative AI and transformer models are having on the world around us. What began as chatbots like ChatGPT and Bard has become a driving force in making technology more intuitive, intelligent, and accessible to all. Where AI once focused on automation and routine tasks, it is now shifting to augmentation, changing how people approach work and rapidly democratizing technologies and specialized knowledge work that were once reserved for the highly trained or deep-pocketed.

Of course, the advent of more human technology isn’t limited to AI: It’s starting to address many of the pain points that exist between us and technology, paving the way for greater human potential.

Technology that is human by design will reach new people and expand access to knowledge, which will enable ongoing innovation. Think of all the people historically alienated by technology who will be able to contribute to the digital revolution. As technology becomes more intuitive, we can tap into these people as new customers and new employees.

Make It Human—the 2024 Trends

A match made in AI

People are asking generative AI chatbots for information – transforming the business of search today, and the futures of software and data-driven enterprises tomorrow.

Meet my agent

AI is taking action, and soon whole ecosystems of AI agents could command major aspects of business. Appropriate human guidance and oversight are critical.

The space we need

The spatial computing technology landscape is rapidly growing, but to successfully capitalize on this new medium, enterprises will need to find its killer apps.

Our bodies electronic

A suite of technologies – from eye-tracking to machine learning to BCI – are starting to understand people more deeply, and in more human-centric ways.

95% of executives agree that making technology more human will massively expand the opportunities of every industry.

Leaders will face familiar questions: Which products and services are ripe for scaling? What new data is at your disposal? What transformative actions can you take? But they will also be at the center of answering questions they may have never expected: What kind of oversight does AI need? Who will be included in the digital transformation? What responsibilities do we have to the people in our ecosystem?

Human by design is not just a description of features; it’s a mandate for what comes next. As enterprises look to reinvent their digital core, human technology will become central to the success of their efforts. Every business is beginning to see the potential emerging technologies have to reinvent the pillars of their digital efforts. Digital experiences, data and analytics, and products all stand to change as technologies like generative AI, spatial computing, and others mature and scale.

In this moment of reinvention, enterprises have the chance to build a strategy that maximizes human potential, and erases the friction between people and technology. The future will be powered by artificial intelligence but must be designed for human intelligence. And as a new generation of technology gives enterprises the power to do more, every choice they make matters that much more too. The world is watching. Will you be a role model or a cautionary tale?

93% of executives agree that with rapid technological advancements, it is more important than ever for organizations to innovate with purpose.

Trend 1 - A match made in AI: Reshaping our relationship with knowledge

The big picture

Our relationship with data is changing—and with it, how we think, work, and interact with technology.  The entire basis of the digital enterprise is getting disrupted.

The search-based “librarian” model of human-data interaction is giving way to a new “advisor” model. Rather than running searches to curate results, people are now asking generative AI chatbots for answers. Case in point: OpenAI launched ChatGPT in November 2022, and it became the fastest-growing app of all time.  Large language models (LLMs) had been around for years, but ChatGPT’s ability to answer questions in a direct and conversational manner made a huge difference.

Data is one of the most important factors shaping today’s digital businesses. And the new chatbots—which can synthesize vast amounts of information to provide answers and advice, use different data modalities, remember prior conversations, and even suggest what to ask next—are transforming how that data is put to work. Ultimately, these chatbots can operate as LLM-advisors, allowing companies to put an advisor with the breadth of their enterprise knowledge at every employee’s fingertips. This could unlock the latent value of data and finally let enterprises tap into the promise of data-driven business.

With generative AI, a digital butler is finally in the cards.

Companies possess valuable, unique information they want customers, employees, partners, and investors to find and use. But whether it’s because we don’t recall the right search terms, can’t frame the query, the data is siloed, or the documents are too dense, a lot of that information is hard to access or distill. For the data-driven business of today, that is serious untapped value that generative AI could finally unlock.

However, the true disruption here isn’t just in how we access data; it’s in the potential to transform the entire software market. What if the interface to every app and digital platform became a generative AI chatbot? What if that became the way we read, write, and interact with data, as a core competency of all platforms?

To truly reap the benefits of generative AI and build the data-and-AI-powered enterprise of the future, businesses need to radically rethink their core technology strategy: how they gather and structure data, how they design their broader architectures, and how they deploy tools and the features those tools include. And new practices like training, debiasing, and AI oversight must be built in from the start.

95% of executives believe generative AI will compel their organization to modernize its technology architecture.

Shoring up your data foundation

New technologies and techniques can help enterprises shore up their data foundation and prepare for the future of data-driven business. Wherever companies start from, LLM-advisors will demand a data foundation that’s more accessible and contextual than ever.

The knowledge graph is one of the most important technologies here. It’s a graph-structured data model of entities and the relationships between them, which encodes greater context and meaning. Not only can a knowledge graph aggregate information from more sources and support better personalization, but it can also enhance data access through semantic search.
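To make the idea concrete, here is a minimal sketch of a knowledge graph built with the networkx Python library; the entities and relations are invented placeholders rather than a recommended enterprise schema.

    import networkx as nx

    # A knowledge graph: nodes are entities, edges carry typed relationships,
    # so the data itself encodes context and meaning.
    kg = nx.MultiDiGraph()
    kg.add_node("Acme Corp", type="company")
    kg.add_node("Model X", type="product")
    kg.add_node("Berlin Plant", type="facility")

    kg.add_edge("Acme Corp", "Model X", relation="manufactures")
    kg.add_edge("Model X", "Berlin Plant", relation="assembled_at")
    kg.add_edge("Acme Corp", "Berlin Plant", relation="operates")

    # A simple contextual lookup: everything directly connected to one product.
    for src, dst, data in kg.edges(data=True):
        if "Model X" in (src, dst):
            print(f"{src} --{data['relation']}--> {dst}")

In practice an enterprise graph would sit in a dedicated graph database with formal ontologies, but the structure is the same: entities plus typed relationships.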

In addition to knowledge graphs, data mesh and data fabric are two ways to help map and organize information that businesses should consider as they update their overall architecture.

Exploring LLMs as your new data interface

On their own, knowledge graphs, data mesh, and data fabric would be a huge step up for enterprise knowledge management systems. But there’s much value to be gained in taking the next step and shifting from the librarian to advisor model. Imagine if instead of using a search bar, employees could ask questions in natural language and get clear answers across every website and app in the enterprise. With an accessible and contextual data foundation, enterprises can start to build this—and there are a few options.

First, companies can train their own LLM from scratch, though this is rare given the significant resources required. A second option is to “fine-tune” an existing LLM. Essentially, this means taking a more general LLM and adapting it to a domain by further training it on a set of domain-specific documents. This option is best for domain-specific cases when real-time information is not necessary, like for creative outputs in design or marketing.
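As a rough illustration of the fine-tuning option, the sketch below adapts a small open model to a file of domain documents using the Hugging Face transformers and datasets libraries; the model name, file path, and hyperparameters are placeholder assumptions, not recommendations.

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    # Placeholder names: swap in your own base model and domain corpus.
    BASE_MODEL = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

    # Domain-specific documents, e.g. one text file of internal manuals.
    dataset = load_dataset("text", data_files={"train": "domain_docs.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="llm-advisor-ft",
                               num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()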

Enterprises are also beginning to fine-tune smaller language models (SLMs) for specialized use cases. These SLMs are more efficient, running at lower cost with smaller carbon footprints, and can be trained more quickly and used on smaller, edge devices.

Lastly, one of the most popular approaches to building an LLM-advisor has been to “ground” pre-trained LLMs by providing them with more relevant, use case-specific information, typically through retrieval augmented generation (RAG).
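A schematic sketch of the RAG pattern follows. The embedding function is a toy hashed bag-of-words stand-in and generate() is a placeholder for whichever pre-trained LLM you call; only the retrieve-then-ground flow is the point.

    import numpy as np

    def embed(text: str, dim: int = 64) -> np.ndarray:
        """Toy stand-in for a real embedding model: hashed bag-of-words."""
        vec = np.zeros(dim)
        for token in text.lower().split():
            vec[hash(token) % dim] += 1.0
        return vec

    def generate(prompt: str) -> str:
        """Placeholder for a call to your pre-trained LLM."""
        return f"[LLM response to grounded prompt]\n{prompt}"

    documents = [
        "Policy 12: laptops must be encrypted before international travel.",
        "Policy 47: expense reports are due within 30 days of purchase.",
    ]
    doc_vectors = np.vstack([embed(d) for d in documents])

    def answer(question: str, top_k: int = 1) -> str:
        q = embed(question)
        # Retrieve: rank documents by cosine similarity to the question.
        scores = doc_vectors @ q / (
            np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9)
        context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])
        # Ground: put the retrieved, use case-specific context into the prompt.
        prompt = ("Answer using only the context below.\n"
                  f"Context:\n{context}\n\nQuestion: {question}")
        return generate(prompt)

    print(answer("When are expense reports due?"))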

The field of generative AI and LLMs is moving fast, but whatever way you choose to explore, one thing will stay constant: your data foundation needs to be solid and contextual, or your LLM-advisor will never live up to its promise.

 

Understanding and mitigating risks

First and most importantly, as businesses begin to explore the new possibilities LLM-advisors bring, they need to understand the associated risks.

Take “hallucinations,” an almost intrinsic characteristic of LLMs. Because they are trained to deliver probabilistic answers and present them with a high degree of certainty, these advisors sometimes confidently relay incorrect information. And while hallucinations are perhaps LLMs’ most notorious risk, other issues must be considered. If using a public model, proprietary data must be carefully protected so that it cannot be leaked. For private models, too, data cannot be shared with employees who should not have access. Compute costs need to be managed. And underlying everything, few people have the relevant expertise to implement these solutions well.

All that said, these challenges shouldn’t be taken as a deterrent, but rather as a call to implement the technology with appropriate controls.

The data going into the LLM—whether through training or the prompt—should be high quality data: fresh, well-labeled, and unbiased. Training data should be zero-party and proactively shared by customers, or first-party and collected directly by the company. And security standards should be implemented to protect any personal or proprietary data. Finally, data permissions must also be in place to ensure that the user is allowed to access any data retrieved for in-context learning.
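One hedged illustration of the data-permissions point: before any retrieved passage is placed in a prompt for in-context learning, filter it against the requesting user's entitlements. The role model below is invented for the example.

    from dataclasses import dataclass, field

    @dataclass
    class Document:
        text: str
        allowed_roles: set = field(default_factory=set)

    # Invented access model: each document names the roles permitted to read it.
    corpus = [
        Document("Q3 revenue forecast by region ...", {"finance", "executive"}),
        Document("Employee handbook: travel policy ...", {"all"}),
    ]

    def retrievable_for(user_roles: set, documents: list) -> list:
        """Keep only documents this user may see before building the prompt."""
        return [d for d in documents
                if "all" in d.allowed_roles or user_roles & d.allowed_roles]

    # A sales analyst's LLM-advisor should never see finance-only material.
    print([d.text for d in retrievable_for({"sales"}, corpus)])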

Beyond accuracy, the outputs of the generative AI chatbot should also be explainable and align with the brand. Guardrails can be put in place so that the model does not respond with sensitive data or harmful words, and so that it declines questions outside its scope. Moreover, responses can convey uncertainty and provide sources for verification.
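The sketch below shows how lightweight output guardrails might look in code: decline out-of-scope questions, redact one kind of sensitive pattern, and attach sources so the answer can be verified. The scope list and regular expression are illustrative assumptions, not a complete safety system.

    import re

    IN_SCOPE_TOPICS = ("benefits", "travel policy", "expenses")   # placeholder scope
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")            # one example of sensitive data

    def guarded_response(question: str, draft_answer: str, sources: list) -> str:
        # Decline questions outside the advisor's scope.
        if not any(topic in question.lower() for topic in IN_SCOPE_TOPICS):
            return "I can only answer questions about HR policy. Please contact support."
        # Redact sensitive data before anything leaves the system.
        safe_answer = SSN_PATTERN.sub("[REDACTED]", draft_answer)
        # Convey uncertainty and cite sources for verification.
        cited = "; ".join(sources) if sources else "no sources retrieved"
        return f"{safe_answer}\n(Based on: {cited}. Please verify before acting.)"

    print(guarded_response(
        "What is the travel policy for contractors?",
        "Contractors book travel via the portal. Employee SSN 123-45-6789 is on file.",
        ["Travel Policy v4, section 2"],
    ))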

Finally, generative AI chatbots should be subject to continuous testing and human oversight. Companies should invest in ethical AI and develop minimum standards to adhere to. And they should gather regular feedback and provide training for employees as well.

Beyond the security implications already discussed in this trend, companies should also think about how LLM-advisors may change user data dynamics.

We have an opportunity to reinvent the ethos of search and restore trust between businesses and their customers. Companies can now act as stewards of their own information—storing, securing, analyzing, and disseminating their data and institutional knowledge directly to customers through digital advisors. This is a big responsibility: your company must ensure that your data remains secure while yielding high-confidence responses in your advisory services. It’s an even bigger opportunity: without search providers mediating the exchange of information, companies can serve as a direct source of reliable insight and win back their customers’ trust.

Conclusion

Generative AI is the ultimate game-changer for data and software. LLMs are changing our relationship with information, and everything from how enterprises reach customers to how they empower employees and partners stands to transform. Leading companies are already diving in, imagining and building the next generation of data-driven business. And before long, it won’t just be leaders. It’ll be the new way digital business works.

Trend 2 - Meet my agent: Ecosystems for AI

The big picture

AI is breaking out of its limited scope of assistance to engage more and more of the world through action. Over the next decade, we will see the rise of entire agent ecosystems—large networks of interconnected AI that will push enterprises to think about their intelligence and automation strategy in a fundamentally different way.

Today, most AI strategies are narrowly focused on assisting with tasks and functions. To the extent that AI acts, it does so as a solitary actor rather than as part of an ecosystem of interdependent parts. But as AI evolves into agents, automated systems will make decisions and take actions on their own. Agents won’t just advise humans; they will act on humans’ behalf. AI will keep generating text, images, and insights, but agents will decide for themselves what to do with it.

While this agent evolution is just getting underway, companies already need to start thinking about what’s next. Because if agents are starting to act, it won’t be long until they start interacting with each other. Tomorrow’s AI strategy will require the orchestration of an entire concert of actors: narrowly-trained AI, generalized agents, agents tuned for human collaboration, and agents designed for machine optimization.

But there’s a lot of work to do before AI agents can truly act on our behalf, or as our proxy. And still more before they can act in concert with each other. The fact is, agents are still getting stuck, misusing tools, and generating inaccurate responses—and these are errors that can compound quickly.

Humans and machines have been paired at the task level, but leaders have never prepared for AI to operate our businesses—until today. As agents are promoted to become our colleagues and our proxies, we will need to reimagine the future of tech and talent together. It’s not just about new skills; it’s about ensuring that agents share our values and goals. Agents will help build our future world, and it’s our job to make sure it’s one we want to live in.

96% of executives agree leveraging AI agent ecosystems will be a significant opportunity for their organizations in the next 3 years.

As AI assistants mature into proxies that can act on behalf of humans, the resulting business opportunities will depend on three core capabilities: access to real-time data and services; reasoning through complex chains of thought; and the creation of tools—not for human use, but for the use of the agents themselves.

Starting with access to real-time data and services: when ChatGPT first launched, a common mistake people made was thinking the application was actively looking up information on the web. In reality, GPT-3.5 (the LLM on which ChatGPT initially ran) was trained on an extremely wide corpus of knowledge and drew on the relationships within that data to provide answers.

But plugins that let ChatGPT access the internet were soon announced, and they could transform foundation models from powerful engines working in isolation into agents able to navigate the current digital world. While plugins have powerful innovative potential on their own, they’ll also play a critical role in the emergence of agent ecosystems.
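As a schematic sketch only, the loop below shows how a plugin or tool call might be wired up: the model chooses a tool, the application executes it, and the result is handed back. call_llm() and the web_search tool are invented stand-ins, not any vendor's real API.

    # Invented tool registry: each "plugin" is just a named Python callable here.
    def web_search(query: str) -> str:
        return f"(stub) top results for: {query}"

    TOOLS = {"web_search": web_search}

    def call_llm(messages: list) -> dict:
        """Stand-in for a chat-model API that can return a structured tool request."""
        return {"tool": "web_search", "arguments": {"query": messages[-1]["content"]}}

    def run_agent(user_request: str) -> list:
        messages = [{"role": "user", "content": user_request}]
        decision = call_llm(messages)                              # model picks a tool
        result = TOOLS[decision["tool"]](**decision["arguments"])  # application runs the plugin
        messages.append({"role": "tool", "content": result})       # result goes back to the model
        return messages                                            # in practice, loop until the model answers

    print(run_agent("What changed in the EU AI Act this week?"))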

The second step in the agent evolution is the ability to reason and think logically—because even the simplest everyday actions for people require a series of complex instructions for machines. AI research is starting to break down barriers to machine reasoning. Chain-of-thought prompting is an approach developed to help LLMs better understand steps in a complex task.
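At its simplest, chain-of-thought prompting just means showing the model worked-out reasoning before posing a new question, as in the invented example below.

    # A one-shot chain-of-thought prompt: the worked example demonstrates the
    # intermediate steps, nudging the model to reason step by step.
    COT_PROMPT = """\
    Q: A warehouse holds 120 pallets. A truck removes 45 and a shipment adds 30.
       How many pallets remain?
    A: Start with 120. Removing 45 leaves 120 - 45 = 75. Adding 30 gives 75 + 30 = 105.
       The answer is 105.

    Q: A depot holds 200 crates. Two trucks remove 60 each and one delivery adds 25.
       How many crates remain?
    A: Let's think step by step."""

    print(COT_PROMPT)  # send this prompt to any LLM completion endpoint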

Between chain-of-thought reasoning and plugins, AI has the potential to take on complex tasks by using both tighter logic and the abundance of digital tools available on the web. But what happens if the required solution isn’t yet available?

When humans face this challenge, we acquire or build the tools we need. AI used to rely on humans exclusively to grow its capabilities. But the third dimension of agency we are seeing emerge is the ability for AI to develop tools for itself.

The agent ecosystem may seem overwhelming. After all, beyond the three core capabilities of autonomous agents, we’re also talking about an incredibly complex orchestration challenge, and a massive reinvention of your human workforce to make it all possible. It’s enough to leave leaders wondering where to start. The good news is existing digital transformation efforts will go a long way to giving enterprises a leg up.

What happens when the agent ecosystem gets to work? Whether as our assistants or as our proxies, the result will be explosive productivity, innovation and the revamping of the human workforce. As assistants or copilots, agents could dramatically multiply the output of individual employees. In other scenarios, we will increasingly trust agents to act on our behalf. As our proxies, they could tackle jobs currently performed by humans, but with a giant advantage—a single agent could wield all of your company’s knowledge and information.

Businesses will need to think about the human and technological approaches they need to support these agents. From a technology side, a major consideration will be how these entities identify themselves. And the impacts on human workers—their new responsibilities, roles, and functions—demand even deeper attention. To be clear, humans aren’t going anywhere. Humans will make and enforce the rules for agents.

Rethinking human talent

In the era of agent ecosystems, your most valuable employees will be those best equipped to set the guidelines for agents. A company’s level of trust in its autonomous agents will determine the value those agents can create, and your human talent is responsible for building that trust.

But agents also need to understand their limits. When does an agent have enough information to act alone, and when should it seek support before taking action? Humans will decide how much independence to afford their autonomous systems.

What companies can do now

What can you do now to set your human and agent workforce up for success? Give agents a chance to learn about your company, and give your company a chance to learn about agents.

Companies can start by weaving the connective fabric between agents’ predecessors, LLMs, and their support systems. By fine-tuning LLMs on your company’s information, you are giving foundation models a head-start at developing expertise.

It's also time to introduce humans to their future digital co-workers. Companies can lay the foundation for trust with future agents by teaching their workforce to reason with existing intelligent technologies. Challenge your employees to discover and transcend the limits of existing autonomous systems.

Finally, let there be no ambiguity about your company’s North Star. Every action your agents take will need to be traceable back to your core values and mission, so it is never too early to operationalize your values from the top to the bottom of your organization.

From a security standpoint, agent ecosystems will need to provide transparency into their processes and decisions. Consider the growing recognition of the need for a software bill of materials – a clear list of all the code components and dependencies that make up a software application – to let companies and agencies look under the hood. Similarly, an agent bill of materials could help explain and track agent decision-making.

What logic did the agent follow to make a decision? Which agent made the call? What code was written? What data was used and with whom was that data shared? The better we can trace and understand agent decision-making processes, the more we can trust agents to act on our behalf.
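An agent bill of materials is not yet a standard artifact, so the sketch below is only one hypothetical shape for such a record, chosen so that each of the questions above has a field that answers it.

    import json
    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone

    @dataclass
    class AgentDecisionRecord:
        """Hypothetical 'agent bill of materials' entry for a single decision."""
        agent_id: str          # which agent made the call
        model_version: str     # what code and model were running
        reasoning_trace: str   # what logic the agent followed
        data_sources: list     # what data was used
        shared_with: list      # with whom that data was shared
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    record = AgentDecisionRecord(
        agent_id="procurement-agent-07",
        model_version="supplier-ranker v2.3",
        reasoning_trace="Chose Supplier B: lowest landed cost meeting the lead-time constraint.",
        data_sources=["supplier_quotes_2024Q1", "logistics_lead_times"],
        shared_with=["finance-agent-02"],
    )
    print(json.dumps(asdict(record), indent=2))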

Conclusion

Agent ecosystems have the potential to multiply enterprise productivity and innovation to a level that humans can hardly comprehend. But they will only be as valuable as the humans that guide them; human knowledge and reasoning will give one network of agents the edge over another. Today, artificial intelligence is a tool. In the future, AI agents will operate our companies. It is our job to make sure they don’t run amok. Given the pace of AI evolution, the time to start onboarding your agents is now.

Trend 3 - The space we need: Creating value in new realities

The big picture

Spatial computing is about to change not just the course of technology innovation, but also the ways people work and live. Whereas desktop and mobile used screens as portals to the digital world, spatial will finally combine our disparate realities, fusing digital and physical together. Apps built for this medium will let people immerse themselves in digital worlds with a physical sense of space, or layer content on top of their physical surroundings.

So, why doesn’t it feel like we’re at the beginning of a new technology era? Why are we inundated instead with talk of a “metaverse slump”? The metaverse is one of the best-known applications of spatial computing. But just look at the price of digital real estate, booming in 2021 and 2022, and down 80-90% in 2023.

Some enterprises are holding off, content to say metaverse hype outpaced technology maturity. But others are racing ahead, building the necessary technology capabilities. Meta has been rapidly developing VR and AR products and introduced Codec Avatars, which use AI and smartphone cameras to create photorealistic avatars. Epic’s RealityScan app lets people scan 3D objects in the physical world with just their phone and turn them into 3D virtual assets.

Underlying it all, advancing technologies like generative AI continue to make it faster and cheaper to build spatial environments and experiences. And, perhaps quietly, these technologies are already being proven out in industrial applications. Digital twins for manufacturing, the growth of VR/AR in training and remote operation, and the establishment of collaborative design environments are all already having practical – and valuable – impacts on industry.

The truth is that new mediums don’t come very often, and when they do, the uptake is slow. But the payoff for diving in early is nearly immeasurable.

92% of executives agree their organization plans to create a competitive advantage leveraging spatial computing.

Critically, new standards, tools and technologies are making it easier—and cheaper—to build spatial apps and experiences that feel familiar.

Think about the websites you frequent or your favorite apps on your phone. Even if their purposes are different, something feels undeniably universal across even the most disparate experiences. Why? They are all built on the same foundation.

For a long time, spatial never had such a foundation. Enter Universal Scene Description (USD), or what can best be described as a file format for 3D spaces. Developed by Pixar, USD is a framework that lets creators map out aspects of a scene, including specific assets and backgrounds, lighting, characters and more. Since USD is designed around bringing these assets together in a scene, different software can be used across each one, enabling collaborative content building and non-destructive editing. USD is quickly becoming central to the most impactful spatial applications, notably within industrial digital twins.
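To make USD less abstract, here is a minimal scene authored with USD's Python bindings (available via the usd-core package); the asset names are invented, and the snippet only hints at how layered, non-destructive composition works.

    from pxr import Usd, UsdGeom, Gf  # USD Python bindings (usd-core package)

    # Author a tiny scene: a root transform with one primitive inside it.
    stage = Usd.Stage.CreateNew("factory_twin.usda")
    factory = UsdGeom.Xform.Define(stage, "/Factory")
    tank = UsdGeom.Sphere.Define(stage, "/Factory/StorageTank")
    tank.GetRadiusAttr().Set(2.5)
    tank.AddTranslateOp().Set(Gf.Vec3d(0.0, 2.5, 0.0))

    # Because USD composes layers, another tool (lighting, animation, simulation)
    # can add its own layer on top of this file without overwriting these edits.
    stage.SetDefaultPrim(factory.GetPrim())
    stage.GetRootLayer().Save()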

Enterprises need to understand they will not be operating spaces in isolation. Just as no webpage or app exists on the internet alone, the next iteration of the web promises to bring these parallel experiences even closer together.

Sense of place

One emerging capability that differentiates spatial computing from earlier digital mediums is how it engages our senses. New technologies are letting engineers design experiences that address more of our senses, like touch, smell, and sound.

In past VR attempts, haptic (touch) hardware was often bulky or underwhelming. But University of Chicago researchers recently proposed using electrodes to mimic touch more convincingly.

Scents can make digital spaces lifelike, too, by evoking memories or triggering the all-important fight-or-flight response. Scentient, a company trying to bring olfactory senses to the metaverse, has been experimenting with the technology for training firefighters and emergency responders, where smells, like the presence of natural gas, can be critical for evaluating an emergency.

Of course, sound, or spatial audio, is also critical to realistic digital scene-building. Lastly, immersive spatial apps will need to respond to how we naturally move.

Spatial computing is not coming to replace desktop or mobile computing, but it is becoming an important piece of the computing fabric that makes up enterprise IT strategy.

We’ve already seen the early stages. Digital twins make more sense when you walk through them. Training is more impactful when you can live the experience rather than watch a video. While these were often standalone pilots, a careful consideration of the unique advantages of spatial computing can help shape and guide enterprise strategy. The market is still maturing, but it is quickly becoming clear that spatial apps thrive when applied in three ways: conveying large volumes of complex information; giving users agency over their experience; and, perhaps counterintuitively, allowing us to augment physical spaces.

When it comes to conveying complex information, the advantage of the spatial medium over the alternatives is probably clearest. Since a space can let users move and act naturally, information can be conveyed in more dynamic, immersive ways. We’ve already seen it in action. Some of the earliest examples of successful spatial apps were industrial digital twins, virtual training scenarios, or real-time remote assistance.

The second advantage spatial has over older mediums is the ability to give users agency to shape their in-app experiences. Because spatial computing lets us build digital experiences that embody a physical sense of space, we can design experiences that give users more flexibility to move and explore.

Lastly, spatial applications bring advantages to physical spaces; they can augment, enhance, and extend physical places without materially changing them. Imagine a future office where physical monitors, projectors, and displays are replaced by spatial computers and apps. People will have the flexibility to design simpler spaces, lowering overhead costs, and to change their surroundings more easily.

As the working world goes spatial, businesses will also need to think about security. There will be more devices than ever: employees will use spatial devices for work, and customers will use them to access experiences. And with this ever-expanding device ecosystem, there will be more entry points for attackers too. So how do you put borders on the borderless? Businesses’ spatial strategies will need to be designed with zero trust principles.
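Read one way, zero trust here means no headset or spatial device is trusted by default: identity, device registration, device posture, and entitlement are verified on every request. The checks below are purely illustrative and assume an invented inventory and policy.

    from dataclasses import dataclass

    @dataclass
    class SpatialRequest:
        user_id: str
        device_id: str
        device_patched: bool   # device posture, e.g. firmware up to date
        resource: str

    REGISTERED_DEVICES = {"hmd-0042", "hmd-0107"}                # illustrative device inventory
    RESOURCE_POLICY = {"digital-twin/plant-3": {"engineering"}}  # resource -> allowed groups
    USER_GROUPS = {"avery": {"engineering"}, "kai": {"sales"}}

    def authorize(req: SpatialRequest) -> bool:
        """Zero trust: verify identity, device, posture, and entitlement every time."""
        allowed = RESOURCE_POLICY.get(req.resource, set())
        return bool(req.device_id in REGISTERED_DEVICES
                    and req.device_patched
                    and USER_GROUPS.get(req.user_id, set()) & allowed)

    print(authorize(SpatialRequest("avery", "hmd-0042", True, "digital-twin/plant-3")))
    print(authorize(SpatialRequest("kai", "hmd-9999", True, "digital-twin/plant-3")))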

Additionally, businesses should recognize that spatial is unfamiliar territory, so both vendors and users should expect to have blind spots. One line of defense won’t be enough, but defense-in-depth strategies that layer multiple kinds of security (administrative, technical, and physical) can be deployed to defend this new frontier.

Conclusion

Spatial computing is about to hit its stride, and the race is on for leaders to get ahead. To position themselves at the top of the next era of technology innovation, enterprise leaders will need to rethink their position on spatial and recognize the effect recent technology advances are about to have. New computing mediums are few and far between, and they can have immeasurable impact on businesses and people for decades. Are you ready to immerse yourself in the moment?

Trend 4 - Our bodies electronic: A new human interface

The big picture

Failing to understand people is a limiting factor for many of the technologies we use today. Just think about the robots and drones that humans can only control if we translate what we want into commands they recognize. The fact is, when tech struggles to connect with us, it’s often because people—what they want, expect, or intend—are an enigma.

Now, innovators are trying to change that. Across industries, they’re building technologies and systems that can understand people in new and deeper ways. They’re creating a “human interface,” and the ripple effect will go far beyond, say, improving smart homes.

Look at how neurotech is beginning to connect with people’s minds. Recently, two separate studies from researchers at the University of California San Francisco and Stanford University demonstrated using neural prostheses—like brain-computer interfaces (BCI)—to decode speech from neural data. This could help patients with verbal disabilities “talk” by translating attempted speech into text or generated voices.

Or consider technologies that read body movement, like eye and hand tracking. In 2023, Apple’s Vision Pro introduced visionOS, which lets users navigate and click with just their gaze and a simple gesture, bypassing the need for a handheld controller.

Innovations like these are rewriting the rules and pushing the limits that have guided human-machine interaction for decades. So often today, we bend over backwards, adapting and changing what we do to make technologies work. But the “human interface” will turn that on its head; when technologies can better understand us—our behavior and our intentions—they will more effectively adapt to us.

To succeed, enterprises will also need to address growing issues around trust and technology misuse. Companies and individuals alike may balk at the idea of letting technology read and understand us in these new and more intimate ways. Biometric privacy standards will need to be updated. And new neuroethics safeguards will need to be defined, including how to appropriately handle brain and other biometric data that can be used to infer people’s intentions and cognitive states. Until formal regulations catch up, it’s on enterprises’ shoulders to earn people’s trust.

31% of consumers agree they are often frustrated that technology fails to understand them and their intentions accurately.

Attempting to understand people—as individuals, target groups, or populations—is a centuries-old business challenge. And in recent decades, using digital technology to do this has been the ultimate differentiator. Digital platforms and devices have let businesses track and quantify people’s behaviors with enormously valuable impact. Now, the “human interface” is changing the game again, making it possible to understand people in deeper, more human-centric ways.

Recent technologies used to understand people have been based on tracking and observing patterns of behavior, and those patterns lack specificity. People may read or watch familiar content, but they may actually want something new. We’re very good at recognizing what people do, but we don’t always understand why they do it.

How the “human interface” measures intent

The “human interface” isn’t any single technology. Rather, it encompasses a suite of technologies that are deepening how innovators see and make sense of people.

Some are using wearable devices to track biosignals that can help predict what people want or understand their cognitive state. Others are building more detailed ways to understand people’s intent in relation to their environments.

Another approach to reading human intent is through AI. Consider human-robot collaboration. People’s state of mind, like whether they’re feeling ambitious or tired, can affect how they approach a task. While humans tend to be good at picking up on these states of mind, robots haven’t been. But efforts are underway to teach robots to identify them.

Lastly, perhaps one of the most exciting “human interface” technologies is neurotech: neuro-sensing and BCI. Many new neurotech companies have appeared in the last decade, and the field holds clear potential to read and identify human intent.

Neurotech highlights the pace of “human interface” advances

Many may think neural-sensing and BCI are years away from widespread commercial use, but recent advances tell a different story.

Skeptics tend to assert that neurotech will stay limited to the healthcare industry. But new use cases are being identified by the day. Two key advances are driving this. The first is decoding brain signals. Advances in AI pattern detection, as well as greater availability of brain data, are making a big difference.

The second area to watch is neuro-hardware – specifically, the quality of external devices. Historically, EEG (electroencephalogram) and fMRI (functional magnetic resonance imaging) have been two of the most widely used external brain sensing techniques. However, until recently, capturing either type of brain signal required a lab setting. But that is starting to change.

As more enterprises start to build “human interface” strategies, they should begin by scoping out the different business areas and challenges that can be transformed.

First, consider how “human interface” technologies are raising the bar when it comes to anticipating people’s actions. Some of the most promising use cases are in areas where people and machines operate in shared spaces. For instance, enterprises could create safer and more productive manufacturing systems if robots could anticipate what people were about to do.

Another area that can be transformed is direct human-machine collaboration: how we use and control technology. As an example, think about how neurotech is letting us tap into our minds and connect with technology in new, potentially more natural ways.

Lastly, the “human interface” could drive the invention of new products and services. Brain-sensing, for instance, could help people “get” themselves better. L'Oréal is working with EMOTIV to help people better understand their fragrance preferences.

Still others are thinking about the “human interface” as a safety measure. Meili Technologies is a startup working to improve vehicle safety. It uses deep learning, visual inputs, and in-cabin sensors to detect if a driver has been incapacitated by a heart attack, seizure, stroke, or other emergency.

Business competition is changing—and trust is more important than ever

Businesses need to start assessing the risks posed by these technologies, and what new policies and safeguards need to be put in place. Rather than wait for regulations to ramp up, responsible enterprises need to begin now, looking to existing biometric laws and to the medical industry for guidance.

If tin foil hats don’t prevent mind reading, what will? More than any other trend this year, security will make or break enterprise and consumer adoption of the “human interface.”

At a minimum, acceptance of more perceptive and connected tools hinges on humans’ ability to be the primary gatekeepers of what information gets shared. This practice needs to be integrated into the design of the next generation of human-computer interface tools, letting people either opt into sharing data or telemetry relevant to the task at hand, or opt out of sharing extraneous or sensitive information.
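In code, letting people be the gatekeepers could be as simple as a consent check in front of every sensor stream, with sharing off by default. The stream categories below are invented for illustration.

    # Illustrative consent registry: sharing is opt-in and off unless granted.
    consent = {
        "eye_tracking_for_ui": True,      # needed for gaze navigation in this session
        "eye_tracking_analytics": False,  # extraneous to the task: stays on device
        "raw_neural_signal": False,       # sensitive: never shared without explicit opt-in
    }

    def may_share(stream: str) -> bool:
        """Only forward telemetry the user has explicitly opted into."""
        return consent.get(stream, False)

    for stream in ("eye_tracking_for_ui", "eye_tracking_analytics", "raw_neural_signal"):
        print(f"{stream}: {'shared with the app' if may_share(stream) else 'kept on device'}")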

Conclusion

The human interface is a new approach to addressing one of the oldest business challenges: understanding people as humans. That’s a big responsibility and an even bigger opportunity. People will have questions, and concerns about privacy will be the first and most important hurdle enterprises face. But the chance to understand people in this deeper, more human-centric way is worth it.

Positive engineering: Our technology crossroads

The world is arriving at what might be technology’s biggest inflection point in history, and enterprises—and the decisions their leaders make—are at the heart of shaping how we move forward.

As we experience more growth and innovation, it won’t all be for the better. There will be more (and new) opportunities for fraud, misinformation, and breaches of security. If we engineer tools with human capabilities but without human intelligence—or even human conscience—we risk creating in a way that erodes both the bottom line and the greater good.

In the era of human tech, every product and every service that enterprises bring to market holds the potential to transform lives, empower communities, and ignite change, for better or for worse. And, invariably, enterprises will face the delicate balancing act of needing to act fast versus needing to act carefully, as well as the expectation that competitors or other countries may not share the same concerns or impose the same guardrails.

As we strive to make technology human by design, we need to think of security as an enabler, an essential way to build trust between people and technology, rather than as a limitation or requirement. And we need to build technology without overshadowing or upending what it means to be human. It’s a concept we call “positive engineering.” Over the last few years, ethical questions have entered the technology domain from a number of different directions: inclusivity, accessibility, sustainability, job security, protection of creative intellectual property, and so much more. Each of them roots back to a single question: how do we balance what we can achieve with technology with what we want as people?

This is a transformative moment for technology and people alike, and the world is ready for you to help shape it.

WRITTEN BY

Paul Daugherty

Chief Technology & Innovation Officer

Adam Burden

Global Innovation Lead

Michael Biltz

Managing Director – Accenture Technology Vision