Technology

If Canada’s health-care system is still struggling with digital transformation, is it ready for AI?

One expert says a main reason the Canadian health-care system hasn’t been digitally transformed is the complexity and location of its data

Artificial intelligence (AI) is coming to transform every industry, but advanced data usage in Canada’s health-care system lags well behind other major industries.

However, we still might be sitting on the cusp of a wave of AI-driven health care innovation that could make life simpler for patients and clinicians alike, all while improving overall health care outcomes across the country.

“At a national level, we really have a unique opportunity to apply AI research and solutions to modernize health care, address the challenges we’re facing in our health-care system, and improve the overall quality of care that we provide for patients,” said Azra Dhalla, director of AI implementation at the Vector Institute, a non-profit corporation dedicated to research in the field of AI.

Dhalla works with stakeholders across academia, hospitals, and public health agencies on the responsible deployment of AI solutions in clinical environments.

What does AI-driven health care innovation look like? 

Dhalla says there are three particular areas worth noting.

  • “The first is personalized or precision medicine. With the use of AI, it will enable easier and earlier detection of patient health changes and also prediction of disease. Through predictive analytics, we’re able to speed up the diagnosis and decision making capabilities. And that also increases the amount of time physicians get to spend with patients.”
  • “The second is increasing health system efficiencies to target resources more efficiently, which improves both system performance and patient outcomes.”
  • “The third is in the area of drug discovery where we can use AI to analyze data and find ways to use existing drugs to treat conditions the drug may not be currently prescribed for, like existing and emerging viruses.”

Here are a few examples of how AI is improving health care:

  • IBM’s Watson for Health is being used to significantly cut the time to market and cost of new drugs.
  • AI is being used for early detection of diseases like cancer and Alzheimer’s.
  • AI-driven predictive analytics are empowering clinicians’ decision making and prioritization efforts.

AI may not ever replace doctors, but the fact that it’s even a possibility shows how much change might be coming.

“I’m excited about what’s coming with AI,” said Mary Jane Dykeman, managing partner at INQ Law, a Toronto health and data law firm. “But there’s the excitement and then there’s rolling up your sleeves and getting to work and getting it done.”

But what does “getting it done” mean, exactly? What are the issues preventing us from realizing AI’s full potential in our health-care system? Or even the benefits of plain old garden-variety digital transformation?

It’s a long list of issues that need to be sorted through – ranging from education to data privacy, data security, and beyond. But it all starts with actually making our health care data accessible. That’s step one – and it’s a big one.

Before you can leverage AI in health care, you have to make the data available

One of the main reasons that the Canadian health-care system hasn’t been digitally transformed is the complexity and location of its data. 

Every health-care organization – from clinic to hospital to regional health authority – has massive volumes of data. And that data may be housed in a variety of systems. Some of it is still on paper, some of it is duplicated between paper and computers. Every region is a bit different. So, while there are other issues with digital transformation in health care, data accessibility is the starting point. And as part of that, we need to think about both the patients and the health-care providers that will be accessing that data. It’s a two-sided equation.

“We need to clean up the data so it’s usable and meaningful,” said Dykeman. “And we need to ground it in the patient experience if we’re going to transform the health system across Canada.” 

Will Canadians approve of their health care data being used in AI?

Canadians are increasingly aware of the frequency of cyber security issues. Think of the well-publicized recent data breach that exposed the health data of mothers, newborn children and parents in Ontario, and the cyberattack on national book retailer Indigo.

The use of AI in health care raises questions for Canadians about how their personal data will be used and kept safe. How comfortable will the average Canadian – especially seniors, who make more frequent use of the health-care system – be with their data being used by AI?

Privacy and ethics considerations loom large here.

“Privacy rules have been in place in Canada for many years,” said Dykeman. “Now many governments are modernizing their privacy legislation, because suddenly we have shared systems and electronic records and digital opportunities – and legislation needs to reflect that. Privacy is the bedrock though. It’s not a one-and-done thing. It is a constant commitment. And if we get it right, it opens all kinds of doors for us.”

Dhalla expands on that:

“When it comes to diagnostics, for example, the greater the number of X-ray images we show an AI and the more diverse the data we show it, the more accurate its diagnosis and predictions will be – so the benefits from accessing this vast amount of data for patients and providers can’t be overstated,” she said. 

“But there’s a stringent process as it relates to privacy with respect to health data. And rightly so. As AI becomes progressively more ingrained in our health services, there’s a need to build regulatory frameworks across the industry and governments. In fact, regulators and medical organizations are already developing guardrails to address these issues.”

So, privacy and policy are critical pillars. 

But they’re not the only ones.

Canada needs to focus on (digitally) improving the patient & clinician experience

The bottom line is that digital transformation – and AI use – is expanding almost daily. And by the time a child born today reaches adulthood, health care will look very different than it does now. In a good way.

That’s why Dykeman believes we all need to rethink the prevailing narrative around AI, data, privacy, and health care. She believes we need to tell a more compelling, positive story. As she noted to Vog App Developers:

“The public deserves transparency about their data – both the negative and positive stories. They don’t know what is possible, because all they hear is the negative, about the last big data breach. Patient-centric design includes them and can bring to them the same excitement we have about the tremendous opportunities to advance our health system with data. Because these transformations will ultimately help them, and if not them personally, others around them.”

There are two key areas where change management will be key:

Health-care workers

Dykeman: “If I’ve been working in health care for years, and I’m quite used to the way I do things, and someone comes along and tells me they’re going to change everything, that’s challenging. But if that change will lead to improvements for patients and clinicians and the family members and caregivers that accompany the patient through the system, that’s different. I can understand that and embrace it.”

Dhalla: “At Vector, we’re helping to work with health-care leaders and clinicians to really change the narrative from fear about AI to how it can really help them augment care. We’re supporting them through this change, providing them with opportunities for learning, knowledge translation, and upskilling.”

Patients

Dykeman: “We need to make it so much easier for patients, because there’s a long legacy of them having to repeat the same information at every step of the health care journey. Or perhaps they show up for an appointment, some aspect of their information is lost, and they’re asked the same questions again and again. That can be changed. It’s also worth noting that a patient may not even be their own best health historian – depending on the nature of their situation or condition.”

Dhalla: “It’s important we communicate to patients that it’s de-identified data we’re using, so patients know that we don’t actually have access to, for example, their names. In fact, what we’re doing is building more generalizable models that can be used across patient populations. These types of conversations can help alleviate some of the concerns patients are having.”
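Dhalla’s point about de-identified data can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration (the field names and the hashing scheme are assumptions, not any hospital’s actual pipeline): direct identifiers are stripped out and the record is keyed by an opaque pseudonym before it ever reaches a model.

```python
import hashlib

# Direct identifiers that are removed before data is shared for model training.
DIRECT_IDENTIFIERS = {"name", "health_card_number", "address", "phone"}

def deidentify(record: dict, salt: str = "per-project-secret") -> dict:
    """Return a copy of a patient record with direct identifiers removed,
    keyed by an opaque pseudonymous ID instead of the person's identity."""
    pseudo_id = hashlib.sha256(
        (salt + record["health_card_number"]).encode()
    ).hexdigest()[:12]
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["pseudo_id"] = pseudo_id
    return cleaned

record = {
    "name": "Jane Doe",
    "health_card_number": "1234-567-890",
    "age": 58,
    "diagnosis": "type 2 diabetes",
}
clean = deidentify(record)
# clean keeps the clinical fields (age, diagnosis) plus a pseudo_id,
# but no name or health card number
```

Real-world de-identification is far more involved (quasi-identifiers such as postal codes and birth dates can still re-identify patients), but the principle is the one Dhalla describes: models see the clinical fields, not the person.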

Building our digital and AI health care future in clinics and medical schools

There’s a whole new generation of health-care workers who will be the tip of the spear when it comes to digital transformation. Universities are recognizing the potential and need to prepare their students for this data-based future by starting to change curriculums. Ultimately, someone studying to be a doctor or nurse in 2030 could find AI as common a tool as a stethoscope. Such is the rate of change.

But Dykeman also believes that big change must start at a smaller level.

“Organizations need to ask themselves: ‘what can we do with this data? What are some of the pain points that patients and clinicians experience each day? What are the use cases that we can develop?’” she said. 

“I think there’s a great opportunity to crowdsource ideas even within individual organizations. Ask the people who are inside them. What are the little problems you’d like to solve? What are the big problems you’d like to solve? How do we get there? We have big audacious goals, but small movements will push them forward piece-by-piece. System transformation is not one big thing. It’s a series of little things.”


Business

How businesses can protect themselves from the rising threat of deepfakes

Dive into the world of deepfakes and explore the risks, strategies and insights to fortify your organization’s defences


Billy Joel’s latest video, for the just-released song Turn the Lights Back On, features him in several deepfakes, singing the tune as himself but decades younger. The technology has advanced to the extent that it’s difficult to distinguish between the fake 30-year-old Joel and the real 75-year-old today.

This is where tech is being used for good. But when it’s used with bad intent, it can spell disaster. In mid-February, a report described how a clerk at a Hong Kong multinational was hoodwinked by deepfakes impersonating senior executives in a video call, resulting in a $35-million theft.

Deepfake technology, a form of artificial intelligence (AI), is capable of creating highly realistic fake videos, images, or audio recordings. In just a few years, these digital manipulations have become so sophisticated that they can convincingly depict people saying or doing things that they never actually did. Before long, the tech will be readily available to the layperson, requiring few programming skills.

Legislators are taking note

In the US, the Federal Trade Commission proposed a ban on using deepfakes to impersonate others — the greatest concern being how the technology can be used to fool consumers. The Feb. 16 proposal noted that an increasing number of complaints have been filed about “impersonation-based fraud.”

A Financial Post article outlined that Ontario’s information and privacy commissioner, Patricia Kosseim, says she feels “a sense of urgency” to act on artificial intelligence as the technology improves. “Malicious actors have found ways to synthetically mimic executives’ voices down to their exact tone and accent, duping employees into thinking their boss is asking them to transfer funds to a perpetrator’s account,” the report said. Ontario’s Trustworthy Artificial Intelligence Framework, which she consults on, aims to set guidelines for public-sector use of AI.

In a recent Microsoft blog, the company stated its plan is to work with the tech industry and government to foster a safer digital ecosystem and tackle the challenges posed by AI abuse collectively. The company also said it’s already taking preventative steps, such as “ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system” as well as using watermarks and metadata.

That prevention will also include enhancing public understanding of the risks associated with deepfakes and how to distinguish between legitimate and manipulated content.

Cybercriminals are also using deepfakes to apply for remote jobs. The scam starts by posting fake job listings to collect information from candidates, then uses deepfake video technology during remote interviews to steal data or unleash ransomware. More than 16,000 people reported to the FBI in 2020 that they were victims of this scam. In the US, this kind of fraud has resulted in losses of more than $3 billion USD. The FBI recommends that, where possible, job interviews be held in person to avoid these threats.

Catching fakes in the workplace

There are detector programs, but they’re not flawless. 

When engineers at the Canadian company Dessa first tested a deepfake detector built using Google’s synthetic videos, they found it failed more than 40% of the time. The Seattle Times noted that while that particular problem was eventually fixed, it comes down to the fact that “a detector is only as good as the data used to train it.” And because the tech is advancing so rapidly, detection will require constant reinvention.

There are other detection services, often tracing blood flow in the face, or errant eye movements, but these might lose steam once the hackers figure out what sends up red flags.

“As deepfake technology becomes more widespread and accessible, it will become increasingly difficult to trust the authenticity of digital content,” noted Javed Khan, owner of Ontario-based marketing firm EMpression. He said a focus of the business is to monitor upcoming trends in tech and share the ideas in a simple way to entrepreneurs and small business owners.

To preempt deepfake problems in the workplace, he recommended regular training sessions for employees. A good starting point, he said, would be to test them on MIT’s eight ways the layperson can try to discern a deepfake, which include telltale signs such as unusual blinking, overly smooth skin, and inconsistent lighting.

Businesses should proactively communicate through newsletters, social media posts, industry forums, and workshops, about the risks associated with deepfake manipulation, he told DX Journal, to “stay updated on emerging threats and best practices.”

To keep ahead of any possible attacks, he said companies should establish protocols for “responding swiftly” to potential deepfake attacks, including issuing public statements or corrective actions.

How can a deepfake attack impact business?

The potential to malign a company’s reputation with a single deepfake should not be underestimated.

“Deepfakes could be racist. It could be sexist. It doesn’t matter — by the time it gets known that it’s fake, the damage could be already done. And this is the problem,” said Alan Smithson, co-founder of Mississauga-based MetaVRse and investor at Your Director AI.

“Building a brand is hard, and then it can be destroyed in a second,” Smithson told DX Journal. “The technology is getting so good, so cheap, so fast, that the power of this is in everybody’s hands now.”

One of the possible solutions is for businesses to have a code word when communicating over video as a way to determine who’s real and who’s not. But Smithson cautioned that the word shouldn’t be shared around cell phones or computers because “we don’t know what devices are listening to us.”

He said governments and companies will need to employ blockchain or watermarks to identify fraudulent messages. “Otherwise, this is gonna get crazy,” he added, noting that Sora — the new AI text to video program — is “mind-blowingly good” and in another two years could be “indistinguishable from anything we create as humans.”

“Maybe the governments will step in and punish them harshly enough that it will just be so unreasonable to use these technologies for bad,” he continued. And yet, he lamented that many foreign actors in enemy countries would not be deterred by one country’s law. It’s one downside he said will always be a sticking point.

It would appear that for now, two defence mechanisms are the saving grace to the growing threat posed by deepfakes: legal and regulatory responses, and continuous vigilance and adaptation to mitigate risks. The question remains, however, whether safety will keep up with the speed of innovation.


Business

The new reality of how VR can change how we work

It’s not just for gaming — from saving lives to training remote staff, here’s how virtual reality is changing the game for businesses


Until a few weeks ago, you might have thought that “virtual reality” and its cousin “augmented reality” were fads that had come and gone. At the peak of the last frenzy around the technology, the company formerly known as Facebook changed its name to Meta in 2021, as a sign of how determined founder Mark Zuckerberg was to create a VR “metaverse,” complete with cartoon avatars (who for some reason had no legs — they’ve got legs now, but there are some restrictions on how they work).

Meta has since spent more than $36 billion on metaverse research and development, but so far has relatively little to show for it. Meta has sold about 20 million of its Quest VR headsets so far, but according to some reports, not many people are spending a lot of time in the metaverse. And a lack of legs for your avatar probably isn’t the main reason. No doubt many were wondering: What are we supposed to be doing in here?

The evolution of virtual reality

Things changed fairly dramatically in June, however, when Apple demoed its Vision Pro headset, and then in early February when the device finally went on sale. At $3,499 US, it is definitely not for the average consumer, but using it has changed the way some think about virtual reality, or the “metaverse,” or whatever we choose to call it.

Some of the enhancements that Apple has come up with for the VR headset experience have convinced Vision Pro true believers that we are either at or close to the same kind of inflection point that we saw after the release of the original iPhone in 2007. Others, however, aren’t so sure we are there yet.

The metaverse sounds like a place where you bump into giant dinosaur avatars or play virtual tennis, but ‘spatial computing’ puts the focus on using a VR headset to enhance what users already do on their computers. Some users generate multiple virtual screens that hang in the air in front of them, allowing them to walk around their homes or offices and always have their virtual desktop in front of them.

VR fans are excited about the prospect of watching a movie on what looks like a 100-foot-wide TV screen hanging in the air in front of them, or playing a video game. But what about work-related uses of a headset like the Vision Pro? 

Innovating health care with VR technology

One of the most obvious applications is in medicine, where doctors are already using remote viewing software to perform checkups or even operations. At Cambridge University, game designers and cancer researchers have teamed up to make it easier to see cancer cells and distinguish between different kinds.

Heads-up displays and other similar kinds of technology are already in use in aerospace engineering and other fields, because they allow workers to see a wiring diagram or schematic while working to repair it. VR headsets could make such tasks even easier, by making those diagrams or schematics even larger, and superimposing them on the real thing. The same kind of process could work for digital scans of a patient during an operation.

Using virtual reality, patients and doctors could also do remote consultations more easily, allowing patients to describe visually what is happening with them, and giving health professionals the ability to offer tips and direct recommendations in a visual way. 

This would not only help with providing care to people who live in remote areas, but could also help when there is a language barrier between doctor and patient. 

Impacting industry worldwide

One technology consulting firm writes that a Vision Pro or other VR headset could streamline assembly and quality control in maintenance tasks. Overlaying diagrams, 3D models, and other digital information onto an object in real time could enable “more efficient and error-free assembly processes,” by providing visual cues, step-by-step guidance, and real-time feedback. 

In addition to these kinds of uses, virtual reality could also be used for remote onboarding for new staff in a variety of different roles, by allowing them to move around and practice training tasks in a virtual environment.

Some technology watchers believe that the retail industry could be transformed by virtual reality as well. Millions of consumers have become used to buying online, but some categories such as clothing and furniture have lagged, in part because it is difficult to tell what a piece of clothing might look like once you are wearing it, or what that chair will look like in your home. But VR promises the kind of immersive experience where that becomes possible.

While many consumers may see this technology only as an avenue for gaming and entertainment, it’s already being leveraged by businesses in manufacturing, health care and workforce development. Even in 2020, 91 per cent of businesses surveyed by TechRepublic either used or planned to adopt VR or AR technology — and as these technological advances continue, adoption is likely to keep ramping up.


Business

5 tips for brainstorming with ChatGPT

How to avoid inaccuracy and leverage the full creative rein of ChatGPT


ChatGPT recruited a staggering 100 million users by January 2023. As the software with one of the fastest-growing user bases, it has likely drawn even more users this year. 

It’s not hard to see why. 

Amazon sellers use it to optimize product listings that bring in more sales. Programmers use it to write code. Writers use it to get their creative juices flowing. 

And occasionally, a lawyer might use it to prepare a court filing, only to fail miserably when the judge notices numerous fake cases and citations. 

Which brings us to the fact that ChatGPT was never infallible. It’s best used as a brainstorming tool with a skeptical lens on every output. 

Here are five tips for how businesses can avoid inaccuracy and leverage the full creative rein of generative AI when brainstorming.

  1. Use it as a base

Hootsuite’s marketing VP Billy Jones talked about using ChatGPT as a jumping-off point for his marketing strategy. He shares an example of how he used it to create audience personas for his advertising tactics. 

Would he ask ChatGPT to create audience personas for Hootsuite’s products? Nope, that would present too many gaps where the platform could plug in false assumptions. Instead, Jones asks for demographic data on social media managers in the US — a request easy enough for ChatGPT to gather data on. From there he pairs the output with his own research to create audience personas. 

  2. Ask open-ended questions

You don’t need ChatGPT to tell you yes or no — even if you learn something new, that doesn’t really get your creative juices flowing. Consider the difference: 

  • Does history repeat itself? 
  • What are some examples of history repeating itself in politics in the last decade?

Open-ended questions give you much more opportunity to get inspired and ask questions you may not have thought of. 

  3. Edit your questions as you go

ChatGPT has a wealth of data at its virtual fingertips to examine and interpret before spitting out an answer. That means you can narrow the data down to a more focused response with multiple prompts that successively tweak its answers. 

For example, you might ask ChatGPT about book recommendations for your book club. Once you get an answer, you could narrow it down by adding another requirement, like specific years of release, topic categories, or mentions by reputable reviewers. Adding context to what you’re looking for will give more nuanced answers.
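That narrowing process can be pictured as a growing conversation. The sketch below uses the common chat-message convention of role/content pairs (the format is an assumption for illustration, not tied to any specific vendor’s API): each follow-up prompt is appended to the running list, so every new answer reflects all the constraints given so far.

```python
# A chat is an ordered list of messages; refining a prompt means
# appending a narrowing follow-up rather than starting over.
messages = [
    {"role": "user", "content": "Recommend some books for my book club."},
]

def refine(history, follow_up):
    """Append a narrowing follow-up prompt to the running conversation."""
    history.append({"role": "user", "content": follow_up})
    return history

refine(messages, "Only include books published in the last five years.")
refine(messages, "Narrow it to titles reviewed by reputable outlets.")

# On each call, the model would receive the whole list, so its final
# answer honours the original request plus both refinements.
```

The design point is that context accumulates: each refinement rides on top of everything before it, which is why successive tweaks give more focused answers than a single long prompt written from scratch.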

  4. Gain inspiration from past success

Have an idea you’re unsure about? Ask ChatGPT about successes with a particular strategy or within a particular industry. 

The platform can scour endless news releases, reports, statistics, and content to find relevant cases from all over the world. Adding the word “adapt” to a prompt can help it take strategies that have worked in the past and apply them to your question. 

As an example, the prompt, “Adapt sales techniques to effectively navigate virtual selling environments,” can generate new solutions by pulling from how old problems were solved. 

  5. Trust, but verify

You wouldn’t publish the drawing board of a brainstorm session. Similarly, don’t take anything ChatGPT says as truth until you verify it with your own research. 

The University of Waterloo notes that blending curiosity and critical thinking with ChatGPT can help you think through ideas and new angles. But once the brainstorming is done, it’s time to turn to real research for confirmation.
