Navigating the new terrain

Artificial Intelligence
Thought Leader: Jennifer Burns
April 9, 2024
Source: KPMG

Why now?

At the tail end of 2022, generative AI captured the world's imagination. In 2023, we saw the proliferation of technology that promises to enhance many facets of our day-to-day lives. As industry develops and adopts this technology at speed, the higher education sector is uniquely positioned to guide decision makers through this imminent transformation in how we work. Because academic institutions shape the next generation of critical thinkers and leaders, generative AI could drastically change how those individuals live, learn, and collaborate.

Students know their future careers will increasingly rely on the blending of human and AI inputs, and they are keen to build literacy and fluency in tools relevant to their areas of study. KPMG in Canada's Generative AI Adoption Index shows that over the last six months, the number of Canadians surveyed who reported using generative AI at work rose by 16%, an annualized growth rate of 32%. Of those surveyed, an astounding 90% said that it has enhanced the quality of their professional work. KPMG has also surveyed students aged 18 and over and found that 52% are already using generative AI to support their studies, while 87% say it has improved the quality of their work.
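As a side note on the arithmetic, the 32% figure is consistent with a simple (linear) annualization of the six-month change; the index does not state its method, so the compound alternative is shown below purely for comparison. This is an illustrative check, not part of the survey's methodology.

```python
# Illustrative check of the annualization arithmetic; the survey does not
# state which method it uses, so both are shown (an assumption, not methodology).
six_month_growth = 0.16  # 16% rise in reported workplace use over six months

simple_annualized = six_month_growth * 2               # 0.32 -> 32%, matches the index
compound_annualized = (1 + six_month_growth) ** 2 - 1  # ~0.346 -> ~34.6%

print(f"simple: {simple_annualized:.0%}, compound: {compound_annualized:.1%}")
```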

The generative AI adoption survey data in Canada delivers a clear message: generative AI technology is here to stay. AI models are enhancing many of the digital products and services we rely on as large players race to deliver new value-driven offerings. Within higher education, the possible applications of generative AI span teaching and learning, research, and core business activities such as HR, finance, IT, and academic administration. At the same time, as the wave of new technology applications laps onto the shores of business and academia, major questions about the fundamental roles of humans and machines raise deeper philosophical questions about the state of our collective future.

In response to these new tools, many institutions, including the University of British Columbia (UBC), are supporting cautious experimentation efforts.

Everyone is navigating this new terrain together, so guidance, support, experimentation, and reflection are going to be key.

Additionally, processes for capturing potential use cases are necessary to ensure the appropriate protections are in place and to provide a mechanism for developing enterprise-grade tools that support those uses.

Addressing today’s AI challenges

As AI tools continue to create disruption for higher education, the narrative is shifting from one of imagination to one of real consequences. Faculty understand the impact these tools will have on their curricula and approaches to teaching, but their optimism about the possibilities is tempered by well-publicized challenges and limitations around bias, model transparency, privacy, and data security. According to Jennifer Burns, the Chief Information Officer of UBC, another key concern for academia is protecting student data and personal information.

“In BC, if a student must use a commercial tool that identifies them, then there must be an ability to opt out or to anonymize their identity. There’s also an issue with retaining data for training,” says Jennifer.

Questions about what information is used to train a model also create the need for stringent due diligence in procurement and contracting processes.
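To make the anonymization requirement concrete, here is a minimal sketch of one common technique: replacing student identifiers with keyed pseudonyms before any record reaches a commercial tool. The helper names and key handling are illustrative assumptions, not a UBC or KPMG implementation.

```python
import hashlib
import hmac

# Illustrative sketch only: pseudonymize student identifiers with an
# institution-held key before records are exported to a commercial AI
# service. Key management here is a placeholder assumption.
PSEUDONYM_KEY = b"replace-with-a-secret-from-a-key-vault"

def pseudonymize(student_id: str) -> str:
    """Return a stable, non-reversible pseudonym for a student ID.

    HMAC-SHA256 keeps the mapping consistent across sessions while
    preventing the vendor from recovering the original identifier.
    """
    return hmac.new(PSEUDONYM_KEY, student_id.encode(), hashlib.sha256).hexdigest()

def prepare_record(record: dict) -> dict:
    """Strip direct identifiers and swap in the pseudonym before export."""
    return {
        "user": pseudonymize(record["student_id"]),
        "prompt": record["prompt"],
        # Name, email, and other direct identifiers are deliberately dropped.
    }

print(prepare_record({"student_id": "s1234567",
                      "name": "Jane Doe",
                      "prompt": "Explain photosynthesis."}))
```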

Leading with good governance: KPMG’s trusted AI framework

While AI is immensely powerful, on its own it is devoid of judgment. Maintaining a ‘human in the loop’ during experimentation, coupled with the requirement that the human take responsibility for how the outputs of the tools are, or are not, used, is essential. All users in higher education communities, students included, need to build the skills necessary to support this goal.
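As one illustration of what a ‘human in the loop’ can mean in practice, the sketch below routes every generated draft through an explicit, recorded human decision before it can be used. The type names and approval flow are assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass

# Illustrative human-in-the-loop gate: no AI output is used until a named
# person reviews it and accepts responsibility. Names and flow are
# assumptions, not a prescribed design.

@dataclass
class Draft:
    content: str
    model: str

@dataclass
class ReviewedOutput:
    content: str
    reviewer: str
    approved: bool

def human_review(draft: Draft, reviewer: str) -> ReviewedOutput:
    """Show the draft to a human and record their decision."""
    print(f"[{draft.model}] draft for review by {reviewer}:\n{draft.content}")
    decision = input("Approve for use? (y/n): ").strip().lower() == "y"
    return ReviewedOutput(draft.content, reviewer, decision)

def publish(output: ReviewedOutput) -> None:
    """Refuse to use any output that a human has not approved."""
    if not output.approved:
        raise PermissionError("Output was not approved by a human reviewer.")
    print(f"Published, with responsibility held by {output.reviewer}.")

# Example: publish(human_review(Draft("Syllabus summary ...", "some-llm"),
#                               reviewer="course instructor"))
```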

KPMG follows a comprehensive AI framework to guide internal and external development of AI. The framework has 10 ethical pillars, highlighting critical areas that organizations should consider as their AI roadmaps evolve. At the centre of this framework are three foundational principles: AI must be value-led, human-centric, and trustworthy.

This framework calls for a multi-disciplinary approach to the ethical and responsible use of AI. In addition to technical skills, expertise in fields such as ethics, philosophy, sociology, and sustainability provides a valuable lens for evaluating the implications of AI integration. Given the range of expertise among their faculties, higher education institutions are uniquely positioned to foster a multi-disciplinary approach that can contribute positively to governance strategies more broadly.

The cyclical nature of innovation: A look at the past to understand the future of AI

As society grapples with the rapid advancements in AI technology, it is important to remember that we have been here before. Throughout history, technological breakthroughs have always prompted us to stop and consider their implications and risks.

Higher education was certainly disrupted and changed by the advent of widespread network connectivity and affordable personal computers in the 1990s. But it was not the end of the line for teachers that some foretold. Rather, they adapted their practice, embraced opportunities, and mitigated challenges as best they could. In the world of education technology, time looks more like a flat circle than a straight line.

Current discussions around trusted AI raise an important question: why, as a society, didn't we think this deeply about data and technology implications like bias or accountability sooner? The principles of trusted AI, from both a technical and a human perspective, resonate broadly across organizations and technology stacks, regardless of where organizations are on their digital transformation journeys.

With AI content coming at us from all directions, it can be difficult to know what is next, where to focus, and what to do about it. If your organization does not have a holistic data governance action plan in place, this is your cue to act.

Jennifer Burns explains that in an era of AI, our thinking needs to extend beyond data governance to other types of IT governance.

The trick is adapting governance so that it can respond at the speed of change. A good governance strategy is tech-agnostic, allowing new evolutions of technology to emerge and be assessed with the same level of rigor. This spans existing critical IT governance initiatives such as privacy impact assessments, security assessments, architectural reviews, and resource planning.
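One way to picture a tech-agnostic strategy is as a single intake checklist that every new technology, AI or otherwise, must clear before deployment. The sketch below is an illustrative assumption about how such a gate might be encoded, not a documented UBC or KPMG process.

```python
from dataclasses import dataclass, field

# Illustrative, technology-agnostic governance gate: every new tool, AI or
# not, must clear the same reviews. Review and field names are assumptions.
REQUIRED_REVIEWS = (
    "privacy_impact_assessment",
    "security_assessment",
    "architectural_review",
    "resource_planning",
)

@dataclass
class TechnologyIntake:
    name: str
    completed_reviews: set[str] = field(default_factory=set)

    def complete(self, review: str) -> None:
        if review not in REQUIRED_REVIEWS:
            raise ValueError(f"Unknown review: {review}")
        self.completed_reviews.add(review)

    def ready_to_deploy(self) -> bool:
        """The same bar applies whether the tool is AI-based or not."""
        return self.completed_reviews.issuperset(REQUIRED_REVIEWS)

intake = TechnologyIntake("generative-AI pilot")
for review in REQUIRED_REVIEWS:
    intake.complete(review)
print(intake.ready_to_deploy())  # True once every review is complete
```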

Mechanisms like these, which most organizations already have in place, apply directly to sustainable adoption of AI. The rate of adoption will be differentiated by the governance structures in place and by the collective human ability to respond to the speed of change organizations are currently experiencing. By establishing a solid foundation of holistic governance, organizations will be better prepared throughout the process, from assessment to deployment.
