Artificial intelligence needs to reset

I had the chance to interview my colleague at ArCompany, Karen Bennet, a seasoned engineering executive in platform technology, open and closed source systems, and artificial intelligence. A former engineering lead at Yahoo! and part of the original team that brought Red Hat to success, Karen has evolved with the technological revolution, applying AI to expert systems in her early IBM days, and is currently bearing witness to the rapid experimentation in machine learning and deep learning. Our discussions about the current state of AI have culminated in this article.

It’s difficult to navigate AI amidst all the hype. The promises of AI, for the most part, have not come to fruition; AI is still emerging and has not become the pervasive force we were promised. Consider the compelling stats that fuel the excitement:

  • 14X increase in the number of active AI startups since 2000
  • Investment into AI start-ups by VCs has increased 6X since 2000
  • The share of jobs requiring AI skills has grown 4.5X since 2013

As of 2017, Statista found that only 5% of businesses worldwide had incorporated AI extensively into their processes and offerings, 32% had not yet adopted it, and 22% had no plans to.

Statista: Adoption level of artificial intelligence (AI) in business organizations worldwide, as of 2017

Filip Piekniewski confirmed in his recent post on VentureBeat, “The AI winter is well on its way”:

We are now in the middle of 2018 and things have changed. Not on the surface yet — the NIPS conference is still oversold, corporate PR still has AI all over its press releases, Elon Musk still keeps promising self-driving cars, and Google keeps pushing Andrew Ng’s line that AI is bigger than electricity. But this narrative is beginning to crack.

We touted the claims of the autonomous car. Earlier this spring, the death of a pedestrian struck by a self-driving vehicle raised alarms that went beyond the technology and called into question the ethics, or lack thereof, behind the decisions of an automated system. The trolley problem is not a simple binary choice between one life and five; it evolves into a debate of conscience, emotion, and perception that complicates the path by which a machine could make a reasonable decision. The conclusion from this article states:

But the dream of a fully autonomous car may be further than we realize. There’s a growing concern among AI experts that it may be years, if not decades before self-driving systems can reliably avoid accidents.

Using history as a predictor, both the cloud and dot-com industries took about 5 years before they started impacting people in a significant way, and almost 10 years before they influenced major shifts in the market. We envision a similar timeline for artificial intelligence. As Karen explains,

To enable adoption by everyone, a product needs to be in place, one that is scalable and one that can be used by everyone–not just data scientists. This product will need to take into account the data lifecycle of capturing data, preparing it, training models and predicting. With data being stored in the cloud, data pipelines can continuously extract and prepare them to train the models which will make the predictions. The models need to continuously improve from new training data, which, in turn, will keep the models relevant and transparent. That is the objective and the promise.
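As a rough illustration of the lifecycle Karen describes, here is a minimal sketch in Python; the file names, columns, and model choice are hypothetical, not part of any specific product:

```python
# A minimal sketch of the capture -> prepare -> train -> predict -> retrain
# loop. File paths, column names, and the model are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def prepare(df: pd.DataFrame):
    """Prepare captured data: drop incomplete rows, split features/label."""
    df = df.dropna()
    return df.drop(columns=["label"]), df["label"]

# Train on the data captured so far.
X, y = prepare(pd.read_csv("captured_data.csv"))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# As the pipeline delivers new batches, retrain so the model stays relevant.
X_new, y_new = prepare(pd.read_csv("new_batch.csv"))
model = model.fit(pd.concat([X, X_new]), pd.concat([y, y_new]))
```

The point is the loop, not the model: without a pipeline that keeps feeding prepared data back into training, models drift out of date, which is exactly the relevance problem Karen raises.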

Building AI Proof of Concepts with No Significant Use Cases

Both Karen and I come from technology and AI start-ups. What we’ve witnessed, and what we’ve heard in discussions with peers in the AI community, is widespread experimentation across a multitude of business issues, experimentation that tends to stay in the lab.

A recent article substantiates how common these AI pilots have become:

Vendors of AI technology are often incentivized to make their technology sound more capable than it is – but hint at more real-world traction than they actually have… Most AI applications in the enterprise are little more than ‘pilots.’ Primarily, vendor companies that sell marketing solutions, healthcare solutions and finance solutions in artificial intelligence are simply test-driving the technology. In any given industry, we find that of the hundreds of vendor companies selling AI software and technology, only about one in three will actually have the requisite skills to do artificial intelligence in the first place.

VCs are realizing they may not see a return on their investments for some time. However, ubiquitous experimentation with very few models seeing daylight is just one of the reasons why AI is not ready for prime time.

Can Algorithms be Accountable?

We’ve heard of the AI “black box,” the current approach that offers no visibility into how decisions are made. This practice flies in the face of banks and large institutions whose compliance standards and policies mandate accountability. With systems operating as black boxes, there may be an inherent trust placed in algorithms as long as their creation has been reviewed and has met some standard set by critical stakeholders. That notion has been quickly disputed, given the overwhelming evidence of faulty algorithms in production and the unexpected and harmful outcomes that result from them. Many of our simplest systems operate as black boxes beyond the scope of any meaningful scrutiny, because of intentional corporate secrecy and the lack of adequate education and understanding of how to critically examine the inputs, the outcomes, and, most importantly, why those results occurred. Karen concurs,

The AI industry today is at a very early stage of being enterprise-ready. AI is very useful and ready for discovery and aiding in parsing through significant amounts of data, however, it still requires human intervention as a guide to evaluate and act on the data and their outcomes.

Karen clarifies that machine learning techniques today enable data to be labelled to surface insights. But if some of the data are erroneously labelled, if there is not enough representation in the data, or if the data carry problematic signals of bias, bad decisions are likely to result. She also attests that current processes continue to be refined:

Currently, AI is all about decision support, to provide insights into a form for which business can draw conclusions. In the next phase of AI, which automates actions from the data, there are additional issues that need to be addressed like bias, explainability, privacy, diversity, ethics, and continuous model learning.

Karen points to image captioning as an example of where AI models make mistakes: captioning exposes the knowledge a model has learned from training on images labelled with the objects they contain, and a model exposed only to a limited number of labelled objects, with limited variety in the training set, will have a limited common sense model of the world. Yet such a common sense model of objects and people is exactly what an AI product needs to truly understand its inputs. Research into determining how a model treats its inputs and reaches its conclusions, in human-understandable terms, is needed for the enterprise. Amazon’s release of Rekognition, its facial recognition technology, is an example of a product in production and licensed for use while noticeable gaps exist in its effectiveness. According to a study released by the ACLU:

…the technology managed to confuse photos of 28 members of Congress with publicly available mug shots. Given that Amazon actively markets Rekognition to law enforcement agencies across the US, that’s simply not good enough.

Joy Buolamwini, an MIT graduate and founder of the Algorithmic Justice League, called in a recent interview for a moratorium on this technology, stating that it was ineffective and needed more oversight, and has appealed for government standards for these types of systems before they are publicly released.

AI’s Major Impediments: Mindset, Culture and Legacy

Having to transform away from legacy systems is the top barrier to implementing AI in many organizations today. Mindset and culture are part of those legacy systems: the established processes, values, and business rules that dictate how organizations operate. These ingrained elements create significant hurdles for business, especially when things are currently humming along nicely and there is no real incentive to dismantle working infrastructure.

AI is a component of business transformation, and while that topic has generated as much buzz as the AI hype itself, the investment and commitment required to make significant changes are met with hesitation. We’ve heard from companies willing to experiment on specific use cases but unprepared for the requirements to train staff, re-engineer processes, and revamp governance and corporate policies. For larger organizations compelled to make these significant investments, the question shouldn’t be one of return on investment, but rather of sustainable competitive advantage.

The Problems with Data Integrity

AI today needs massive amounts of data to produce meaningful results and is unable to leverage experience from one application in another. While Karen notes there is work in progress to overcome these limitations, transfer of learning is needed before models can be applied in a scalable way. There are scenarios, however, where AI can be used effectively today, such as revealing insights in images, voice, and video, and translating between languages.

Companies are learning that the focus should be on:

1) diversity in the data, including proper representation across populations

2) ensuring diverse experiences, perspectives, and thinking go into the creation of algorithms

3) prioritizing the quality of the data over its quantity

These are especially important as bias creeps in and trust and confidence in the data degrade. For example, Turkish is a gender-neutral language, but the AI model behind Google Translate incorrectly predicts gender when translating to English. Likewise, cancer-spotting AI image recognition has been trained only on fair-skinned people. In the computer vision example above, Joy Buolamwini tested these AI technologies and found they worked more effectively on male faces than female ones, and on lighter skin than darker skin. The “error rates were as low as 1% on males and as high as 35% on dark females.” These issues occur because of the failure to use diverse training data. Karen concedes,

The concept of AI is simple but the algorithms get smarter by ingesting more and more real-world data, however, being able to explain the decisions becomes extremely difficult. The data may be continuously changing and AI models require filters to prevent incorrect labelling such as an image of a black man being labelled as a gorilla or a panda becoming labelled as a gibbon. Enterprises relying on faulty data to make decisions will lead to ill-informed results.
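One practical response to the skew Buolamwini measured is disaggregated evaluation: report error rates per subgroup rather than one aggregate number, so a model that fails 35% of the time on one group cannot hide behind a strong overall average. A minimal sketch, with made-up data and hypothetical column names:

```python
# A minimal sketch of disaggregated evaluation: break the error rate out
# by subgroup instead of reporting one aggregate score. Data is invented.
import pandas as pd

results = pd.DataFrame({
    "group":     ["male", "male", "male", "female", "female", "female"],
    "actual":    [1, 0, 1, 1, 1, 0],
    "predicted": [1, 0, 1, 0, 1, 1],
})

results["error"] = results["actual"] != results["predicted"]
print(results.groupby("group")["error"].mean())  # error rate per group
```

A gap between the per-group numbers is the signal to go back to the training data, which is usually where the imbalance lives.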

Fortunately, given AI’s nascency, very few organizations are making significant business decisions from the data today. From what we’ve witnessed, most solutions mainly produce product recommendations and personalized marketing communications. Any wrong conclusions that result have lesser societal impacts… at least for now.

Using data to make business decisions is not new; what has changed is the exponential increase in the volume and mix of structured and unstructured data being used. AI enables us to draw on data continuously from its source and obtain insight much faster. This is an opportunity for businesses that have the capacity and structure to handle data volume from diverse sources. For other organizations, however, the masses of data can represent a risk, because divergent sources and formats make the information harder to transform: emails, system logs, web pages, customer transcripts, documents, slides, informal chats, social networks, and exploding rich media like images and video. Data transformation continues to be a stumbling block to developing clean data sets, and hence effective models.
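As a toy illustration of why divergent formats are a stumbling block, consider normalizing two hypothetical sources, system logs and emails, into one schema before any model sees them; every source needs its own parsing rules, and those rules are where clean data sets are won or lost:

```python
# A toy sketch of data transformation: two divergent sources normalized
# into one schema. The record formats here are invented for illustration.
import pandas as pd

logs = ["2018-07-01 ERROR payment timeout", "2018-07-02 INFO login ok"]
emails = [{"sent": "2018-07-01", "subject": "Refund request"}]

rows = []
for line in logs:  # parse the flat log format
    date, level, *msg = line.split()
    rows.append({"date": date, "source": "log",
                 "text": f"{level}: {' '.join(msg)}"})
for e in emails:   # map the email fields onto the same schema
    rows.append({"date": e["sent"], "source": "email", "text": e["subject"]})

print(pd.DataFrame(rows))  # one clean, uniform table for downstream models
```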

Bias is More Prevalent than We Realize

Bias exists in many business models built to minimize risk and optimize targeting opportunities, and while those models may produce profitable results, they have been known to cause unintended consequences that harm individuals and deepen economic disparities. Insurance companies may use location information or credit score data to issue higher premiums to poorer customers. Banks may approve loans for prospects with lower credit scores who are already debt-ridden and may be unable to afford the higher lending rates.

There is heightened caution surrounding bias because the introduction of AI will not only perpetuate existing biases; the results from these learning models may generalize to the point that they deepen the economic and societal divide. Bias presents itself in current algorithms that determine the likelihood of recidivism (the likelihood of re-offending), such as COMPAS. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) was created by a company known as Northpointe, with the goal of assessing the risk and predicting the criminality of defendants in pre-trial hearings. The types of questions used in the initial COMPAS research introduced enough human bias that the system’s recommendations unintentionally treated black defendants who would never go on to re-offend more harshly than white defendants who would go on to re-offend and were treated more leniently at sentencing. With no public standard available, Northpointe was able to create its own definition of fairness and develop an algorithm without third-party assessment… until recently. This article confirmed, “A Popular Algorithm Is No Better at Predicting Crimes Than Random People”:

If this software is only as accurate as untrained people responding to an online survey, I think the courts should consider that when trying to decide how much weight to put on them in making decisions.

Karen stipulated,

While we try to fix existing systems to minimize this bias, it is critical that models train on diverse sets of data to prevent future harms.

Given the potential risks of faulty models pervading business and society, businesses do not yet have governance mechanisms to police for unfair or immoral decisions that inadvertently impact the end consumer. This is discussed under ethics below.

The Increasing Demand for Privacy

Karen and I both came from Yahoo! We worked with strong research and data teams that were able to contextualize user behaviour across our platform. We continuously studied user behaviour and understood users’ propensities across our multitude of properties: Music, Homepage, Lifestyle, News, and so on. At that time, there were no strict standards or regulations for data use. Privacy was relegated to users’ passive agreement to the platform’s terms and conditions, much as it is today.

The recent Cambridge Analytica/Facebook scandal has brought personal data privacy front and centre. Frequent data breaches at major credit institutions like Equifax and, most recently, at Facebook and Google+ continue to compound the issue. Questions of ownership, consent, and erroneous contextualization make this a ripe topic as AI continues to iron out its kinks. The European General Data Protection Regulation (GDPR), which came into effect May 25, 2018, will change the game for organizations, particularly those that collect, store, and analyze personal user information, rewriting the rules businesses have operated under for many years. The unbridled use of personal information has come to a head, as businesses now realize there will be significant limitations on data use and, more importantly, on ownership.

We are seeing the early effects of this in location-based advertising. This $75 billion industry, slated to grow at a 21% five-year CAGR through 2021, continues to be constrained by the oligopoly of Facebook and Google, which secure the bulk of revenues. And now the GDPR raises the stakes, making these ad-tech companies more accountable:

Twitter: @hessiejones

The stakes are high enough that [advertisers] have to have a very high degree of confidence that what you’re being told is actually in compliance. It seems like there is enough general confusion about what will ultimately constitute a violation that people are taking a broad approach to this until you can get precise about what compliance looks like.

While regulation may eventually crimp revenues, for the moment the mobile and ad platform industries are also facing increasing scrutiny from the very subjects they have been monetizing for years: consumers. This, coupled with the examination of established practices, will force the industry to shift how it collects, aggregates, analyzes, and shares user information.

Operationalizing privacy will take time, significant investment (a topic that needs to be afforded more attention), and a change in mindset that will impact organizational policy, process, and culture.

The Inevitable Coupling of AI & Ethics

The prevailing promise of AI is societal benefit: streamlining processes, increasing convenience, improving products and services, and detecting potential harms through automation. Delivering on the last of these means continually measuring inputs and outputs against outcomes across manufacturing processes, services, assessment solutions, production, and product quality.

As discussions and news about AI persist, the coupling of “AI” with “ethics” reveals increasingly grave concerns about where AI technology can inflict societal damage that tests human conscience and values.

CB Insights: Tech Cos Confront Ethics of AI

Beyond individual privacy concerns, today we are seeing examples of innovation that border on the unconscionable. As stated previously, Rekognition and Face++ are being used in law enforcement and citizen surveillance even while the technology is deemed faulty. Employees walked out in protest of Google’s decision to provide artificial intelligence to the Defense Department for the analysis of drone footage, with the goal of creating a sophisticated system to surveil cities, in a project known as Project Maven. The same tech giant has also been building Project Dragonfly for China, a censored search engine that could also map individual searches to identities.

Decision-makers and regulators will need to instill new processes and policies to properly assess how AI technologies are being used, for what purpose, and whether there may be unintended fallout. Karen pointed to new questions about the use of data in AI algorithms that will need to be considered:

How do we detect sensitive data fields and anonymize them while preserving the important features of a dataset? Can we train on synthetic data as an alternative in the short term? The question we need to ask ourselves when creating the algorithm: What fields do we require to deliver the outcomes we want? In addition, what parameters should we create to define “fairness” in the models, meaning does this treat two individuals differently? And if so, why? How do we continuously monitor for this within our systems?
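As a rough illustration of the anonymization question Karen raises, here is a minimal sketch; the column names and the name-based detection heuristic are hypothetical, and real systems would use proper classifiers and salted or keyed hashing:

```python
# A minimal sketch: detect likely sensitive fields by name and replace them
# with a one-way hash, so rows stay linkable (equality is preserved) without
# exposing raw identities. Heuristics and column names are illustrative.
import hashlib
import pandas as pd

SENSITIVE_HINTS = ("name", "email", "phone", "ssn", "address")

def anonymize(df: pd.DataFrame) -> pd.DataFrame:
    for col in df.columns:
        if any(hint in col.lower() for hint in SENSITIVE_HINTS):
            df[col] = df[col].astype(str).map(
                lambda v: hashlib.sha256(v.encode()).hexdigest()[:16]
            )
    return df

df = pd.DataFrame({"email": ["a@x.com", "b@y.com"], "age": [34, 29]})
print(anonymize(df))
```

Plain hashing of low-entropy fields like phone numbers can be reversed by brute force, so a production system would add a secret salt or use a tokenization service; the sketch only shows where anonymization sits in the pipeline.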

An AI Winter is a Serendipitous Opportunity to Get AI Ready

AI has come a long way, but it still needs time to mature. In a world of increasing automation and deliberate progress towards stronger cognitive computing capabilities, the impending AI winter affords business the necessary time to determine how AI fits into their organizations and which problems they want it to solve. The potential casualties of AI need to be addressed in policy and governance, along with its impact on individuals and society.

AI’s impact will be far greater in this next industrial revolution, as its ubiquity becomes more deeply woven into our lives. Leading voices in AI, from Geoffrey Hinton to Fei-Fei Li to Andrew Ng, have called for an AI reset, because deep learning has not yet proven to scale. The promise of AI is not waning; rather, the expectation of its real arrival has been pushed further out, perhaps 5-10 years. We have time to work through these issues in deep learning and other AI methods, and to build the processes to effectively extract value from data. This culmination of business readiness, regulation, education, and research is necessary to bring both business and consumers up to speed, and to ensure a regulatory system is in place that properly constrains the technology and leaves humanity at the helm a little while longer.

About Karen Bennet

Karen Bennet, a Principal Consultant at ArCompany, is an experienced senior engineering leader with more than 25 years in the software development business across both open and closed source solutions. More recently, Karen’s efforts have focused on artificial intelligence, enabling enterprises, particularly in the banking and automotive sectors, to experiment with AI/ML. She has held engineering leadership positions at Cerebri AI, IBM, Yahoo!, and Trapeze, and was an early leader who helped grow Cygnus and Red Hat into sustainable businesses.

 

This post originally appeared on Forbes

 



How businesses can protect themselves from the rising threat of deepfakes

Dive into the world of deepfakes and explore the risks, strategies and insights to fortify your organization’s defences

Billy Joel’s latest video, for the just-released song “Turn the Lights Back On,” features him in several deepfakes, singing the tune as himself but decades younger. The technology has advanced to the point that it’s difficult to distinguish the fake 30-year-old Joel from the real 75-year-old of today.

That is tech being used for good. When it’s used with bad intent, it can spell disaster. In mid-February, a report showed that a clerk at a Hong Kong multinational was hoodwinked by deepfakes impersonating senior executives on a video call, resulting in a $35 million theft.

Deepfake technology, a form of artificial intelligence (AI), is capable of creating highly realistic fake videos, images, or audio recordings. In just a few years, these digital manipulations have become so sophisticated that they can convincingly depict people saying or doing things that they never actually did. In little time, the tech will become readily available to the layperson, who’ll require few programming skills.

Legislators are taking note

In the US, the Federal Trade Commission has proposed a ban on the use of deepfakes to impersonate others, the greatest concern being their power to fool consumers. The Feb. 16 proposal noted that an increasing number of complaints have been filed about “impersonation-based fraud.”

A Financial Post article reported that Ontario’s information and privacy commissioner, Patricia Kosseim, feels “a sense of urgency” to act on artificial intelligence as the technology improves. “Malicious actors have found ways to synthetically mimic executives’ voices down to their exact tone and accent, duping employees into thinking their boss is asking them to transfer funds to a perpetrator’s account,” the report said. Ontario’s Trustworthy Artificial Intelligence Framework, on which she consults, aims to set guidelines for public-sector use of AI.

In a recent blog post, Microsoft stated that its plan is to work with the tech industry and government to foster a safer digital ecosystem and tackle the challenges posed by AI abuse collectively. The company also said it’s already taking preventative steps, such as “ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system,” as well as using watermarks and metadata.

That prevention will also include enhancing public understanding of the risks associated with deepfakes and how to distinguish between legitimate and manipulated content.

Cybercriminals are also using deepfakes to apply for remote jobs. The scam starts with fake job listings posted to collect information from candidates, then uses deepfake video technology during remote interviews to steal data or unleash ransomware. More than 16,000 people reported being victims of this kind of scam to the FBI in 2020, and in the US it has resulted in losses of more than US$3 billion. Where possible, the agency recommends conducting job interviews in person to avoid these threats.

Catching fakes in the workplace

There are detector programs, but they’re not flawless. 

When engineers at the Canadian company Dessa first tested a deepfake detector that was built using Google’s synthetic videos, they found it failed more than 40% of the time. The Seattle Times noted that the problem in question was eventually fixed, and it comes down to the fact that “a detector is only as good as the data used to train it.” But, because the tech is advancing so rapidly, detection will require constant reinvention.

There are other detection services, often tracing blood flow in the face, or errant eye movements, but these might lose steam once the hackers figure out what sends up red flags.

“As deepfake technology becomes more widespread and accessible, it will become increasingly difficult to trust the authenticity of digital content,” noted Javed Khan, owner of Ontario-based marketing firm EMpression. He said a focus of the business is to monitor upcoming trends in tech and share the ideas in a simple way with entrepreneurs and small business owners.

To preempt deepfake problems in the workplace, he recommended regular training sessions for employees. A good starting point, he said, would be to test them on MIT’s eight ways the layperson can try to discern a deepfake, which range from unusual blinking to overly smooth skin to odd lighting.

Businesses should proactively communicate through newsletters, social media posts, industry forums, and workshops about the risks associated with deepfake manipulation, he told DX Journal, to “stay updated on emerging threats and best practices.”

To keep ahead of any possible attacks, he said companies should establish protocols for “responding swiftly” to potential deepfake attacks, including issuing public statements or corrective actions.

How can a deepfake attack impact business?

The potential to malign a company’s reputation with a single deepfake should not be underestimated.

“Deepfakes could be racist. It could be sexist. It doesn’t matter — by the time it gets known that it’s fake, the damage could be already done. And this is the problem,” said Alan Smithson, co-founder of Mississauga-based MetaVRse and investor at Your Director AI.

“Building a brand is hard, and then it can be destroyed in a second,” Smithson told DX Journal. “The technology is getting so good, so cheap, so fast, that the power of this is in everybody’s hands now.”

One of the possible solutions is for businesses to have a code word when communicating over video as a way to determine who’s real and who’s not. But Smithson cautioned that the word shouldn’t be shared around cell phones or computers because “we don’t know what devices are listening to us.”

He said governments and companies will need to employ blockchain or watermarks to identify fraudulent messages. “Otherwise, this is gonna get crazy,” he added, noting that Sora, the new AI text-to-video program, is “mind-blowingly good” and in another two years could be “indistinguishable from anything we create as humans.”

“Maybe the governments will step in and punish them harshly enough that it will just be so unreasonable to use these technologies for bad,” he continued. And yet, he lamented that many foreign actors in enemy countries would not be deterred by one country’s law. It’s one downside he said will always be a sticking point.

It would appear that for now, two defence mechanisms are the saving grace to the growing threat posed by deepfakes: legal and regulatory responses, and continuous vigilance and adaptation to mitigate risks. The question remains, however, whether safety will keep up with the speed of innovation.



The new reality of how VR can change how we work

It’s not just for gaming — from saving lives to training remote staff, here’s how virtual reality is changing the game for businesses


Until a few weeks ago, you might have thought that “virtual reality” and its cousin “augmented reality” were fads that had come and gone. At the peak of the last frenzy around the technology, the company formerly known as Facebook changed its name to Meta in 2021, as a sign of how determined founder Mark Zuckerberg was to create a VR “metaverse,” complete with cartoon avatars (who for some reason had no legs — they’ve got legs now, but there are some restrictions on how they work).

Meta has since spent more than $36 billion on metaverse research and development, but so far has relatively little to show for it. Meta has sold about 20 million of its Quest VR headsets so far, but according to some reports, not many people are spending a lot of time in the metaverse. And a lack of legs for your avatar probably isn’t the main reason. No doubt many were wondering: What are we supposed to be doing in here?

The evolution of virtual reality

Things changed fairly dramatically in June, however, when Apple demoed its Vision Pro headset, and again in early February when it finally went on sale. At $3,499 US, the device is definitely not for the average consumer, but using it has changed the way some think about virtual reality, or the “metaverse,” or whatever we choose to call it.

Some of the enhancements Apple has come up with for the VR headset experience have convinced Vision Pro true believers that we are at, or close to, the same kind of inflection point we saw after the release of the original iPhone in 2007. Others, however, aren’t so sure we are there yet.

The metaverse sounds like a place where you bump into giant dinosaur avatars or play virtual tennis, but ‘spatial computing’ puts the focus on using a VR headset to enhance what users already do on their computers. Some users generate multiple virtual screens that hang in the air in front of them, allowing them to walk around their homes or offices and always have their virtual desktop in front of them.

VR fans are excited about the prospect of watching a movie on what looks like a 100-foot-wide TV screen hanging in the air in front of them, or playing a video game. But what about work-related uses of a headset like the Vision Pro? 

Innovating health care with VR technology

One of the most obvious applications is in medicine, where doctors are already using remote viewing software to perform checkups or even operations. At Cambridge University, game designers and cancer researchers have teamed up to make it easier to see cancer cells and distinguish between different kinds.

Heads-up displays and other similar kinds of technology are already in use in aerospace engineering and other fields, because they allow workers to see a wiring diagram or schematic while working to repair it. VR headsets could make such tasks even easier, by making those diagrams or schematics even larger, and superimposing them on the real thing. The same kind of process could work for digital scans of a patient during an operation.

Using virtual reality, patients and doctors could also do remote consultations more easily, allowing patients to describe visually what is happening with them, and giving health professionals the ability to offer tips and direct recommendations in a visual way. 

This would not only help with providing care to people who live in remote areas, but could also help when there is a language barrier between doctor and patient. 

Impacting industry worldwide

One technology consulting firm writes that a Vision Pro or other VR headset could streamline assembly and quality control in maintenance tasks. Overlaying diagrams, 3D models, and other digital information onto an object in real time could enable “more efficient and error-free assembly processes” by providing visual cues, step-by-step guidance, and real-time feedback.

In addition to these kinds of uses, virtual reality could also be used for remote onboarding for new staff in a variety of different roles, by allowing them to move around and practice training tasks in a virtual environment.

Some technology watchers believe that the retail industry could be transformed by virtual reality as well. Millions of consumers have become used to buying online, but some categories such as clothing and furniture have lagged, in part because it is difficult to tell what a piece of clothing might look like once you are wearing it, or what that chair will look like in your home. But VR promises the kind of immersive experience where that becomes possible.

While many consumers may see this technology only as an avenue for gaming and entertainment, it’s already being leveraged by businesses in manufacturing, health care and workforce development. Even in 2020, 91 per cent of businesses surveyed by TechRepublic either used or planned to adopt VR or AR technology — and as these technological advances continue, adoption is likely to keep ramping up.



5 tips for brainstorming with ChatGPT

How to avoid inaccuracy and give full creative rein to ChatGPT


ChatGPT recruited a staggering 100 million users by January 2023. As software with one of the fastest-growing user bases, we imagine even higher numbers this year. 

It’s not hard to see why. 

Amazon sellers use it to optimize product listings that bring in more sales. Programmers use it to write code. Writers use it to get their creative juices flowing. 

And occasionally, a lawyer might use it to prepare a court filing, only to fail miserably when the judge notices numerous fake cases and citations. 

Which brings us to the fact that ChatGPT was never infallible. It’s best used as a brainstorming tool with a skeptical lens on every output. 

Here are five tips for how businesses can avoid inaccuracy and give full creative rein to generative AI when brainstorming.

  1. Use it as a base

Hootsuite’s marketing VP Billy Jones talked about using ChatGPT as a jumping-off point for his marketing strategy. He shares an example of how he used it to create audience personas for his advertising tactics. 

Would he ask ChatGPT to create audience personas for Hootsuite’s products? Nope, that would present too many gaps where the platform could plug in false assumptions. Instead, Jones asks for demographic data on social media managers in the US — a request easy enough for ChatGPT to gather data on. From there he pairs the output with his own research to create audience personas. 

  2. Ask open-ended questions

You don’t need ChatGPT to tell you yes or no — even if you learn something new, that doesn’t really get your creative juices flowing. Consider the difference: 

  • Does history repeat itself? 
  • What are some examples of history repeating itself in politics in the last decade?

Open-ended questions give you much more opportunity to get inspired and ask questions you may not have thought of. 

  3. Edit your questions as you go

ChatGPT has a wealth of data at its virtual fingertips to examine and interpret before spitting out an answer. That means you can narrow the data down for a more focused response, with multiple prompts that successively tweak its answers.

For example, you might ask ChatGPT about book recommendations for your book club. Once you get an answer, you could narrow it down by adding another requirement, like specific years of release, topic categories, or mentions by reputable reviewers. Adding context to what you’re looking for will give more nuanced answers.

  4. Gain inspiration from past success

Have an idea you’re unsure about? Ask ChatGPT about successes with a particular strategy or within a particular industry. 

The platform can scour endless news releases, reports, statistics, and content to find you relevant cases from all over the world. Adding the word “adapt” to a prompt can help it take strategies that have worked in the past and apply them to your question.

As an example, the prompt, “Adapt sales techniques to effectively navigate virtual selling environments,” can generate new solutions by pulling from how old problems were solved. 

  5. Trust, but verify

You wouldn’t publish the drawing board of a brainstorm session. Similarly, don’t take anything ChatGPT says as truth until you verify it with your own research. 

The University of Waterloo notes that blending curiosity and critical thinking with ChatGPT can help to think through ideas and new angles. But, once the brainstorming is done, it’s time to turn to real research for confirmation.

