An AI Cartoon May Interview You for Your Next Job

Jun 7, 2024 6:00 AM

As if trying to land a new gig weren’t demoralizing enough, job seekers are now meeting with characters powered by generative AI, capable of interviewing a virtually unlimited number of candidates to judge their skills.

Image: Collage of a cartoon businessperson at a desk with a browser window and a speech bubble that reads “Tell me about…”

Photo-illustration: WIRED Staff; Getty Images

The cartoon interviewer greets you onscreen. He looks a little young to be asking questions about a job—sort of a cartoon version of Harry Potter, with dark hair and glasses. You can choose other interviewers to speak with instead, representing various genders and races with names like Benjamin, Leslie, and Kristin. Alex, the name given to this AI interviewer, asks about your professional experience, theoretical questions about programming, and then gives out a coding exercise.

Alex is an AI interviewer developed by micro1, a US company that describes itself as an AI recruitment engine for engineers. The tech provides an “enjoyable, gamified, and less-biased interview process,” the company’s founder, Ali Ansari, claims in a demo video of the tech.

The use of AI tools in job hunting is becoming widespread. Career sites like Indeed and LinkedIn have incorporated generative AI tools for job seekers and recruiters into their platforms. There are interviewer chatbots companies can enable, as well as AI tools to help people practice for job interviews. But the use of AI in evaluating candidates has drawn mixed reviews: Some HR tools have been caught making negative judgments about applicants with Black-sounding names, giving preference to men, or skipping over candidates with employment gaps on their résumés.

AI tools in hiring save companies money and time, but their long-term consequences for workers are not yet clear.

Ansari tells WIRED that the tool allows companies to “screen candidates in a much more efficient and accurate manner.” Micro1 offers its model in two formats: Companies can use the software to interview candidates for specific roles, letting the AI interviewer screen an effectively unlimited number of applicants rather than picking through a sample of thousands by hand. Or candidates can go through the process independently to be added to a marketplace of engineers. That internal marketplace holds a talent pool of vetted engineers—from India, Argentina, Brazil, and other countries far from American tech hubs—whom Ansari describes as “untapped but exceptional.” This, he says, may help diversify who gets to work in top tech jobs. “We become the way into Silicon Valley,” Ansari says.

More than 100,000 people have gone through micro1’s screenings with hopes of being added to its marketplace of engineers, and the company lists a number of tech companies, including DoNotPay (whose CEO has also invested in micro1) among those who have used its system to screen or hire engineers from its marketplace. Ansari says companies are using micro1 to screen as many as 30,000 candidates a month.

Asynchronous video interviews have become more common, with companies turning to prerecorded responses in automated systems to handle screening interviews. Screening has become more onerous since a series of layoffs over the past two years has whittled down the number of open positions, and recruiters who post roles on sites like LinkedIn can receive hundreds or thousands of applications. Generative AI tools have also made it easier for job seekers to apply in bulk, creating still more applications for recruiters and hiring managers to review—some with little relevance to the role. But even as AI becomes more common on the hiring side, some recruiters are wary of the biases it may carry and have steered clear of using the tools in their decisions.

Of course there’s still bias with AI tools, Ansari says. “Of course there’s also bias with humans. The goal with the AI system is to make it much less biased than humans.” The AI interviewer on micro1 won’t pass or fail a candidate, Ansari explains; instead, it places them into categories like inexperienced, mid-level, and senior. From there, it’s on the hiring manager or recruiter to decide whether the candidate is a good fit for the role. They can also listen to audio recordings of the responses rather than relying solely on the AI interviewer’s interpretation.

Zahira Jaser, an associate professor at the University of Sussex Business School, says a lot remains unknown about the impact of AI and asynchronous interviewing—including how the tech affects candidates. Recording oneself can be awkward, and there are no human cues to pick up on from an AI interviewer. After being told to act naturally and put their best foot forward in the already nerve-wracking process of human job interviews throughout their career, people may not know how to show their best self to a chatbot, particularly when they’re up against opaque, built-in biases of AI.

“In the real world, humans are biased. But there are techniques we can use to overcome this human bias,” Jaser says. “In an algorithm-driven bias, this is likely to be very systematic.” For example, some AI hiring tools are trained on profiles of past successful employees, raising concerns that they will repeat past biased hiring practices.

For now, these AI tools don’t have the final say in who gets hired. But they increasingly have sway over which applicants get face time with a real human, and that can have a massive impact on what the workforce looks like going forward.

But if you ask Ansari, there is an alternative path for interviews in the future: He believes job seekers may also use AI-driven avatars to interview for jobs with AI interviewers, relegating the painful, tedious parts of the initial job search to computers entirely. AI could make “really good matches” between job seekers and companies, Ansari says. “And then the company and the candidate can spend their actual time on a Zoom call or in-person interview.”

Amanda Hoover is a general assignment staff writer at WIRED. She previously wrote tech features for Morning Brew and covered New Jersey state government for The Star-Ledger. She was born in Philadelphia, lives in New York, and is a graduate of Northeastern University.
Credit: www.wired.com
