Four years ago, I wrote a post reflecting on my 50 years in IT, and the pursuit of value from the use of IT (you can read the full post here). I described the changes that had occurred since I started my working life as a computer operator on an IBM 1401, which had a processing speed (never really published as such) close to 10 million times slower than today’s microprocessors, 8k of storage (later upgraded, with an additional unit, to 16k), no solid state or hard drive, no displays or communication capability, and no operating system (that was me!). Weighing in at around 4 tons, it needed a fully air-conditioned room, with a raised floor, approximately twice the size of my living room.

I described how my world in 2013 compared with that time, and went on to discuss how, at the enterprise level, technology, and how it was being used, was continuing to change at an ever-increasing rate, with the technology model shifting from computing – the technology in and of itself – to consumption – how individuals and organizations use technology in ways that create value for them and, in the case of organizations, their stakeholders.

I closed by lamenting that, 15 years after The Information Paradox, which described the challenge of getting value from so-called “IT projects”, was first published, the track record remained dismal, and realizing the value promised by IT remained elusive.

Fast forward to today

Today, 4 years later (a lifetime in the digital world), the primary factors contributing to the elusiveness of realizing value from IT remain little changed, namely:

  • a continued, often blind focus on the technology itself, rather than the change – increasingly significant and complex change – that technology both shapes and enables;
  • the unwillingness of business leaders to get engaged in, and take ownership of, this change – electing to abdicate their accountability to the IT function; and
  • failure to inclusively and continually involve the stakeholders affected by the change, without whose understanding and “buy-in” failure is pretty much a foregone conclusion.

The challenge of creating and sustaining value from our use of technology described above is still real. However, the almost total failure of leadership – technical, business and government – to understand the extent and implications of the change that technology is enabling in the evolving digital age is leading us, by default, into an increasingly dark place – one that I think few of us saw coming, certainly not unfolding as it is. I call this place “the Dark Side of Digital”. I alluded to it in 2013 in discussing IoT, robotics and algorithmic computing, when I said that they brought with them “unprecedented challenges in security, data privacy, safety, governance and trust…(and) have considerable potential to change the nature of work” – I would now revise and add to the latter, saying “…have considerable potential to fundamentally impact the future of work and, indeed, the future of society”.

The elements of this dark side fall into three main categories:

  1. Cybersecurity: This is the most traditional category – one that, albeit not so named, has been with us since the advent of computers, when cards, tapes or other media could be lost or stolen. However, as our connectedness continues to increase, so does our susceptibility to cybersecurity attacks, with a growing number of such threats arising out of machine-to-machine learning and the Internet of Things. There are nearly 7 billion connected devices in use this year, and this is expected to jump to a whopping 20 billion over the next four years. Most cybercriminals are now operating with increasing levels of skill and professionalism. As a result, the adverse effects of cyber-breaches, -hacks, or -attacks, including the use of ransomware and phishing, continue to escalate, resulting in increased physical loss and theft of media, eroding competitive advantage and shareholder value, and severely damaging reputations. More severe attacks have the capacity to seriously disrupt regular business operations and governmental functions, resulting in the temporary outage of critical services and the compromise of sensitive data. In the case of nation-state supported actors, attacks have the potential to cause complete paralysis and/or destruction of critical systems and infrastructure, with the capacity to result in significant destruction of property and/or loss of life. Under such circumstances, regular business operations and/or government functions cease, and data confidentiality, integrity, and availability are completely compromised for extended periods.
  2. The Future of Work: The fear that technology will eliminate jobs has been with us pretty much since the advent of the first commercial computers but, until the last few years, the argument that new jobs will appear to replace the old has largely held true. Now, however, the revolutionary pace and breadth of technological change is such that we are experiencing a situation recently described by the Governor of the Bank of England, Mark Carney:

“Alongside great benefits, every technological revolution mercilessly destroys jobs & livelihoods well before new ones emerge.”

Early AI and IoT systems are already augmenting human work and changing management structures across labor sectors. We are already seeing, and can expect to continue to see, an uneven distribution of AI’s impact across sectors, job types, wage levels, skills and education. It’s very hard to predict which jobs will be most affected by AI-driven automation.

While, traditionally, low-skill jobs have been at the greatest risk of replacement from automation, as Stephen Hawking says, the “rise of artificial intelligence is likely to extend job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.” He goes on to say that “we are at the most dangerous moment in the development of humanity”.

  3. The Future of Society: On the societal front, a paradigm shift is underway in how we work and communicate, as well as how we express, inform and entertain ourselves. Equally, governments and institutions are being reshaped, as are systems of education, healthcare and transportation, among many others. AI and automated decision-making systems are often deployed as a background process, unknown and unseen by those they impact. Even when they are seen, they may provide assessments and guide decisions without being fully understood or evaluated. Visible or not, as AI systems proliferate through social domains, there are few established means to validate AI systems’ fairness, and to contest and rectify wrong or harmful decisions or impacts. Professional codes of ethics, where they exist, don’t currently reflect the social and economic complexities of deploying AI systems within critical social domains like healthcare, law enforcement, criminal justice, and labor. Similarly, technical curricula at major universities, while recently emphasizing ethics, rarely integrate these principles into their core training at a practical level1. As Mike Ananny and Taylor Owen said in a recent Globe and Mail article2, there is “a troubling disconnect between the rapid development of AI technologies and the static nature of our governance institutions. It is difficult to imagine how governments will regulate the social implications of an AI that adapts in real time, based on flows of data that technologists don’t foresee or understand. It is equally challenging for governments to design safeguards that anticipate human-machine action, and that can trace consequences across multiple systems, data-sets, and institutions.” This disconnect further adds to the erosion of trust in our institutions that we have been seeing over several decades, and to the resulting resurgence of populism.

Adding to the threats to society is the proliferation of the internet and social media. In a world where we can all be publishers, we see shades of Orwell’s 1984 in a post-truth world of alternate facts and fake news. Rather than becoming a more open and collaborative society, we see society fracturing into siloed echo-chambers of alternate reality, built on confirmation bias, and fed by self-serving populist leaders proposing dangerously simplistic solutions – sometimes in tweets of 140 characters or less – to poorly understood and increasingly complex issues.

So, what do we need to do?

The complexity of these challenges, and their interconnectedness across sectors, make it a critical responsibility of all stakeholders of global society – governments, business, academia, and civil society – to work together to better understand the emerging trends.

If business leaders expect to harness the latest technology advances to the benefit of their customers, business and society at large, there are two primary challenges they need to address now.

  1. As companies amass vast amounts of personal data used to develop products and services, they must own the responsibility for the ethical use and security of that information. Ethical and security guidelines for how data is collected, controlled and ultimately used are of paramount concern to customers, and rightfully so. To gain the trust of customers, companies must be transparent and prove they employ strong ethical guidelines and security standards.
  2. It is incumbent on organizations to act responsibly toward their employees and make it possible for them to succeed in the rapidly changing work environment. That means clearly defining the company vision and strategies, enabling shifting roles through specialized training, and redefining processes to empower people to innovate and implement new ways of doing business to successfully navigate this new and ever-changing environment.

As a society, if we are to avoid sleepwalking into a dystopian future, as described in 2013 by internet pioneer Nico Mele as one “inconsistent with the hard-won democratic values on which our modern society is based… a chaotic, uncontrollable, and potentially even catastrophic future”, we must recognize that technology is not destiny – institutions and policies are critical. Policy plays a large role in shaping the direction and effects of technological change. “Given appropriate attention and the right policy and institutional responses, advanced automation can be compatible with productivity, high levels of employment, and more broadly shared prosperity.”

The challenge is eloquently described by WEF founder and executive chairman, Dr. Klaus Schwab:

“Shaping the fourth industrial revolution to ensure that it is empowering and human-centred, rather than divisive and dehumanizing, is not a task for any single stakeholder or sector or for any one region, industry or culture. The fundamental and global nature of this revolution means it will affect and be influenced by all countries, economies, sectors and people. It is, therefore, critical that we invest attention and energy in multi-stakeholder cooperation across academic, social, political, national and industry boundaries. These interactions and collaborations are needed to create positive, common and hope-filled narratives, enabling individuals and groups from all parts of the world to participate in, and benefit from, the ongoing transformations.”

A call to action!

We need, as Dr. Schwab goes on to say, to “…take dramatic technological change as an invitation to reflect about who we are and how we see the world. The more we think about how to harness the technology revolution, the more we will examine ourselves and the underlying social models that these technologies embody and enable, and the more we will have an opportunity to shape the revolution in a manner that improves the state of the world.”3

We cannot wait for “them” to do this – as individuals, we can and must all play a leadership role as advocates in our organizations and communities to increase the awareness and understanding of the changes ahead, and to shape those changes such that, as Dr. Schwab says, they are empowering and human-centred, rather than divisive and dehumanizing.

Sources:

1 The AI Now Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term, a summary of the AI Now public symposium hosted by the White House and New York University’s Information Law Institute, July 7th, 2016

2 Ethics and governance are getting lost in the AI frenzy, The Globe and Mail, March 20, 2017

3 The Fourth Industrial Revolution, Klaus Schwab

About the Author