Border officials don't have data to address racial bias in facial recognition

A woman boarding an SAS flight to Copenhagen goes through the facial recognition verification system VeriScan at Dulles International Airport in Virginia. Jim Watson / AFP/Getty Images

Facial recognition technology is prone to errors, and when it comes to racial bias at airports, there's a good chance it's not learning from its mistakes.

Debra Danisek, a privacy officer with US Customs and Border Protection, talked to an audience Friday at the International Association of Privacy Professionals Summit about what data its facial recognition tech collects — but more importantly, what data it doesn't collect.

"In terms of 'Does this technology have a different impact on different racial groups?' we don't collect that sort of data," Danisek said. "In terms of keeping metrics on which groups are more affected, we wouldn't have those metrics to begin with."

In other words, while the CBP does collect data that's available on people's passports — age, gender and citizenship — to help improve its facial recognition algorithm, it doesn't gather data for race and ethnicity, even when a passenger is misidentified. So the CBP doesn't know when there's a mismatch based on a person's skin color. It's relying on reports from the Department of Homeland Security's Redress program to identify when that happens.

"If they notice we have a pattern of folks making complaints [about] this process, then we would investigate," Danisek said.

Gender and race pose a challenge for facial recognition. Studies have shown the technology has a harder time identifying women and people with darker skin. Civil rights advocates warn that the shortcomings could adversely affect minorities.

Several airports and airlines have rolled out the biometric tech across the US, offering a faster way to board your flights. The technology scans a traveler's face and matches it with a passport photo provided to the airlines by the State Department. It'll be used in the top 20 US airports by 2021.

CBP says it has a match rate in the high 90th percentile, while a study from the DHS' Office of Inspector General found that it had a match rate closer to 85%. Customs and Border Protection says the system is getting better. A spokesman for the agency noted that the OIG study drew from a demo in 2017 that looked at the potential for the Traveler Verification Service. "In the current deployment of TVS," the spokesman said, "CBP has been able to successfully photograph and match over 98% of travelers who have photos in U.S. Government systems."

In addition, CBP is working with the National Institute of Standards and Technology to analyze the performance of face-matching tech, "including impacts due to traveler demographics and image quality," the spokesman said.

A lack of diverse data is what led to racial bias with facial recognition to begin with. Experts have suggested that photo databases for facial recognition could be using more images of white people than people of color, which skews how effective the technology is for minorities.

Jake Laperruque, a senior counsel at the Constitution Project, is concerned that the agency is turning a blind eye to the potential for racial bias at airports. "The comments reflect a troubling lack of concern about [the] well-documented problem of facial recognition systems having higher error rates for people of color," Laperruque said in an email. "CBP can't simply ignore a serious issue and take a 'see no evil approach' — if they're not willing to confront serious civil rights problems and deal with them, they shouldn't be trusted to operate a program like this."

Originally published May 6.
Updated May 8: Added comment from a CBP spokesman.

Olivia Culpo flaunts her figure in barely-there swimwear [Photos]

Olivia Culpo. Photo: Olivia Culpo Official Instagram (oliviaculpo)

Olivia Culpo took to social media yet again and posted some sizzling snaps of her enjoying the sun. The gorgeous beauty relaxed on a grey lounger in a photo shared to Instagram.

The 27-year-old model can be seen showing off her enviable curves as she wore a red-and-white candy cane striped Christian Dior bikini. The beauty captioned the pinup image, 'Energy flows where attention goes.' Olivia seems to have gone makeup-free for the post. But we have to say she looked gorgeous nonetheless. She sure knows how to rock a swimsuit. She wore no jewellery.

Olivia Culpo is a successful model and was also featured in Sports Illustrated's annual Swimsuit edition. The former Miss USA posed in a series of bikinis, teasing millions of her fans and Instagram followers. Olivia sure seems to be keeping herself busy. She recently did a photoshoot for Maxim magazine.

Olivia Culpo opened up about her dating life to Us Weekly. "What I struggle with is the people are, like … they don't love you for you," Culpo told Us Weekly exclusively.

Her side of the story: Olivia Culpo slammed famous men who, despite being married, contacted her after her split from boyfriend Danny Amendola. Olivia Culpo may have had some unfortunate incidents with men, but we hope she doesn't write them off completely. The former beauty queen has had experiences with her exes that may not be categorized as pleasant. Especially Danny Amendola, who may not have taken too kindly to their breakup.

Well, it does look like Olivia Culpo isn't letting anyone hold her back. We have to say, she looks gorgeous in her Instagram snaps. You can check out the pics on her Instagram.

OC Moazzem denied bail, sent to jail

Sonagazi police station's former officer-in-charge Moazzem Hossain is produced before the court on Monday. Photo: Prothom Alo

A Dhaka court on Monday sent former officer-in-charge of Sonagazi police station Moazzem Hossain to jail, rejecting his bail petition in a case filed under the Digital Security Act.

Judge of Dhaka Cyber Tribunal Ash-Shams Jaglul Hussein passed the order when Faruk Ahmed, a lawyer for Moazzem Hossain, sought bail for him, reports UNB. The court also fixed 30 June for hearing on charge framing in the case.

Earlier, police arrested Moazzem Hossain from the High Court area on Sunday in a case filed over circulating a video on social media of Feni madrasa girl Nusrat Jahan Rafi's statement at Sonagazi police station.

On 27 May, a Dhaka court issued an arrest warrant against Moazzem Hossain in the case, but he could not be traced after the issuance of the warrant for his arrest.

Allegations brought against Moazzem after Nusrat Jahan Rafi's murder were found to be authentic by the Police Bureau of Investigation (PBI). Supreme Court lawyer Syeddul Haque Sumon filed the case against Moazzem under the Digital Security Act on 15 May.

The former OC was accused of illegally interrogating Nusrat and recording the episode on his phone. The video was later circulated on social media.

Moazzem summoned Sonagazi Islamia Senior Fazil Madrasa's principal Siraj Ud Doula and Nusrat to the police station after Nusrat accused her teacher of sexually harassing her. The OC interrogated Nusrat in the absence of her lawyer or any other woman. She was crying but the OC paid no heed and kept questioning her using inappropriate language.

Nusrat's mother had filed a case with Sonagazi police station over her daughter's sexual harassment. On 6 April, the girl was set on fire at an exam centre in Sonagazi upazila by people loyal to Siraj Ud Doula. She died on 10 April at Dhaka Medical College and Hospital.

Moazzem was withdrawn the same day and suspended a month later on charges of negligence of duty.

Questioning Facebook freedom through art

Artist Puja Kshatriya is presenting Facebook of Reclaimed Identities, her new series of small-format paintings, soon at the India Habitat Centre. Her works are done on canvas with oil and acrylic. She has also used the scratching technique, where one adds scratches with a blade to add effects to the strokes. Compared to Puja's earlier works, her recent paintings are small in size. Most of these works, not more than three feet in size, resemble the frame of a computer screen.

The images are those of flowers and faces of children. The irony that Puja wants to build up in this series becomes palpable when one comes to know that these faces belong to children who do not have any access to Facebook or related activities. They may be featured on Facebook through somebody's agency and in fact without their knowledge. The image-infested realm of Facebook often uses and abuses the identity of people who are randomly photographed without consent, credit or remuneration. Seen against this context of Facebook abuse, Puja's works speak of the realms and identities that are incapacitated by the overuse of the medium. Hence, Puja's works open up a critical body of paintings that suggestively questions the so-called Facebook freedom. Inversely, the artist acknowledges the medium's power to give a face to those people who otherwise will never have a face in the world of the internet.

Facebook for Puja Kshatriya is an operative metaphor in her works. She portrays faces and events through emblematic registrations. Pursuing her passion for the arts over the last forty years, Puja has earned great admiration and accolades for her work, with exhibitions in Dubai, Jakarta, London and Singapore, amongst others.

Departing from the traditional style of painting, along with acrylics, Puja uses the blade-scraping technique, wherein two to three layers of oil colours are applied and then the blade is used to bring out the forms. The pressure while scraping is varied. This technique gives a sculptural effect to the figures.

These Entrepreneurs Are Taking on Bias in Artificial Intelligence

September 5, 2018

Whether it's a navigation app such as Waze, a music recommendation service such as Pandora or a digital assistant such as Siri, odds are you've used artificial intelligence in your everyday life.

"Today 85 percent of Americans use AI every day," says Tess Posner, CEO of AI4ALL.

AI has also been touted as the new must-have for business, for everything from customer service to marketing to IT. However, for all its usefulness, AI also has a dark side. In many cases, the algorithms are biased.

Related: What Is AI, Anyway? Know Your Stuff With This Go-To Guide.

Some of the examples of bias are blatant, such as Google's facial recognition tool tagging black faces as gorillas or an algorithm used by law enforcement to predict recidivism disproportionately flagging people of color. Others are more subtle. When Beauty.AI held an online contest judged by an algorithm, the vast majority of "winners" were light-skinned. Search Google for images of "unprofessional hair" and the results you see will mostly be pictures of black women (even searching for "man" or "woman" brings back images of mostly white individuals).

While more light has been shined on the problem recently, some feel it's not an issue addressed enough in the broader tech community, let alone in research at universities or the government and law enforcement agencies that implement AI.

"Fundamentally, bias, if not addressed, becomes the Achilles' heel that eventually kills artificial intelligence," says Chad Steelberg, CEO of Veritone. "You can't have machines where their perception and recommendation of the world is skewed in a way that makes its decision process a non-sequitur from action. From just a basic economic perspective and a belief that you want AI to be a powerful component to the future, you have to solve this problem."

As artificial intelligence becomes ever more pervasive in our everyday lives, there is now a small but growing community of entrepreneurs, data scientists and researchers working to tackle the issue of bias in AI. I spoke to a few of them to learn more about the ongoing challenges and possible solutions.

Cathy O'Neil, founder of O'Neil Risk Consulting & Algorithmic Auditing
Solution: Algorithm auditing

Back in the early 2010s, Cathy O'Neil was working as a data scientist in advertising technology, building algorithms that determined what ads users saw as they surfed the web. The inputs for the algorithms included innocuous-seeming information like what search terms someone used or what kind of computer they owned.

However, O'Neil came to realize that she was actually creating demographic profiles of users. Although gender and race were not explicit inputs, O'Neil's algorithms were discriminating against users of certain backgrounds, based on the other cues.

As O'Neil began talking to colleagues in other industries, she found this to be fairly standard practice. These biased algorithms weren't just deciding what ads a user saw, but arguably more consequential decisions, such as who got hired or whether someone would be approved for a credit card. (These observations have since been studied and confirmed by O'Neil and others.)

What's more, in some industries — for example, housing — if a human were to make decisions based on the specific set of criteria, it likely would be illegal due to anti-discrimination laws. But, because an algorithm was deciding, and gender and race were not explicitly the factors, it was assumed the decision was impartial.
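This kind of proxy effect can be checked directly: if a simple model can recover a protected attribute from the "neutral" targeting features, those features are doing the demographic profiling on their own. The Python sketch below is purely illustrative; the dataset, column names and binary gender encoding are assumptions, not anything from O'Neil's actual adtech work.

```python
# Illustrative proxy check (hypothetical data and column names).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("ad_targeting_log.csv")  # hypothetical log, one row per user

# "Neutral" inputs the targeting model actually uses.
features = df[["search_topic", "device_type", "browser", "zip_code"]]
# Protected attribute held out solely for the audit (assumed binary here).
is_female = (df["self_reported_gender"] == "female").astype(int)

# If this simple classifier can recover gender from the inputs, the inputs are
# effectively proxies for gender, whether or not gender is ever an explicit input.
probe = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                      LogisticRegression(max_iter=1000))
auc = cross_val_score(probe, features, is_female, cv=5, scoring="roc_auc").mean()
print(f"Gender recoverable from 'neutral' features with AUC ~ {auc:.2f}")
# AUC near 0.5 means little leakage; well above 0.5 means the features encode
# gender, and any model trained on them can end up discriminating by it.
```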
"I had left the finance [world] because I wanted to do better than take advantage of a system just because I could," O'Neil says. "I'd entered data science thinking that it was less like that. I realized it was just taking advantage in a similar way to the way finance had been doing it. Yet, people were still thinking that everything was great back in 2012. That they were making the world a better place."

O'Neil walked away from her adtech job. She wrote a book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, about the perils of letting algorithms run the world, and started consulting.

Eventually, she settled on a niche: auditing algorithms.

"I have to admit that it wasn't until maybe 2014 or 2015 that I realized this is also a business opportunity," O'Neil says.

Right before the election in 2016, that realization led her to found O'Neil Risk Consulting & Algorithmic Auditing (ORCAA).

"I started it because I realized that even if people wanted to stop that unfair or discriminatory practices then they wouldn't actually know how to do it," O'Neil says. "I didn't actually know. I didn't have good advice to give them." But, she wanted to figure it out.

So, what does it mean to audit an algorithm?

"The most high-level answer to that is it means to broaden our definition of what it means for an algorithm to work," O'Neil says.

Often, companies will say an algorithm is working if it's accurate, effective or increasing profits, but for O'Neil, that shouldn't be enough.

"So, when I say I want to audit your algorithm, it means I want to delve into what it is doing to all the stakeholders in the system in which you work, in the context in which you work," O'Neil says. "And the stakeholders aren't just the company building it, aren't just for the company deploying it. It includes the target for the algorithm, so the people that are being assessed. It might even include their children. I want to think bigger. I want to think more about externalities, unforeseen consequences. I want to think more about the future."

For example, Facebook's News Feed algorithm is very good at encouraging engagement and keeping users on its site. However, there's also evidence it reinforces users' beliefs, rather than promoting dialog, and has contributed to ethnic cleansing. While that may not be evidence of bias, it's certainly not a net positive.

Right now, ORCAA's clients are companies that ask for their algorithms to be audited because they want a third party — such as an investor, client or the general public — to trust it. For example, O'Neil has audited an internal Siemens project and New York-based Rentlogic's landlord rating system algorithm. These types of clients are generally already on the right track and simply want a third-party stamp of approval.

However, O'Neil's dream clients would be those who don't necessarily want her there.

"I'm going to be working with them because some amount of pressure, whether it's regulatory or litigation or some public relations pressure kind of forces their hand and they invite me in," O'Neil says.

Most tech companies pursue profit above all else, O'Neil says, and won't seriously address the issue of bias unless there are consequences. She feels that existing anti-discrimination protections need to be enforced in the age of AI.

"The regulators don't know how to do this stuff," O'Neil says. "I would like to give them tools. But, I have to build them first. … We basically built a bunch of algorithms assuming they work perfectly, and now it's time to start building tools to test whether they're working at all."
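The first pass of that kind of audit is fairly concrete: pull the system's decision log and compare outcomes across the groups of people it touches. The Python sketch below is a minimal illustration of that starting point, not ORCAA's methodology; the file and column names are hypothetical.

```python
# Minimal outcome audit over a hypothetical decision log.
import pandas as pd

log = pd.read_csv("decision_log.csv")  # one row per person the model scored
# Assumed columns:
#   group     - demographic group recorded for the audit
#   approved  - 1 if the model approved/selected the person, else 0
#   qualified - 1 if a later ground-truth review found the person qualified

report = log.groupby("group").agg(
    people=("approved", "size"),
    selection_rate=("approved", "mean"),
)

# False-negative rate per group: qualified people the model turned away.
qualified = log[log["qualified"] == 1]
report["false_negative_rate"] = qualified.groupby("group")["approved"].apply(
    lambda s: 1.0 - s.mean()
)

print(report.round(3))
# Large gaps between groups in either column are where the audit's harder
# questions begin: who bears the cost of the errors, and is the gap justified?
```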
Related: Artificial Intelligence Is Likely to Make a Career in Finance, Medicine or Law a Lot Less Lucrative

Frida Polli, co-founder and CEO of Pymetrics
Solution: Open source AI auditing

Many thought artificial intelligence would solve the problem of bias in hiring, by making sure human evaluators weren't prejudging candidates based on the name they saw on a resume or the applicant's appearance. However, some argue hiring algorithms end up perpetuating the biases of their creators.

Pymetrics is one company that develops algorithms to help clients fill job openings based on the traits of high-performing existing employees. It believes it's found a solution to the bias problem in an in-house auditing tool, and now it's sharing the tool with the world.

Co-founder and CEO Frida Polli stresses that fighting bias was actually a secondary goal for Pymetrics.

"We're not a diversity-first platform," Polli says. "We are a predictive analytics platform."

However, after seeing that many of her clients' employee examples used to train Pymetrics's algorithms were not diverse, combating bias became important.

"Either you do that or you're actually perpetuating bias," Polli says. "So, we decided we certainly were not going to perpetuate bias."

Early on, the company developed Audit AI to make sure its algorithms were as neutral as possible when it came to factors including gender and race. If a company looking to fill a sales role had a sales team that was predominantly white and male, an unaudited algorithm might pick a candidate with those same traits. Polli was quick to point out that Audit AI would also recommend adjustments if an algorithm was weighted in favor of women or people of color.

Some critics say if you tweak a hiring algorithm to remove bias you're lowering the bar, but Polli disagrees.

"It's the age-old criticism that's like, 'oh well, you're not getting the best candidate,'" Polli says. "'You're just getting the most diverse candidate, because now you've lowered how well your algorithm is working.' What's really awesome is that we don't see that. We have not seen this tradeoff at all."

In May, Pymetrics published the code for its internal Audit AI auditing tool on Github. Polli says the first goal for making Audit AI open source is to encourage others to develop auditing techniques for their algorithms.

"If they can learn something from the way that we're doing it that's great. Obviously there are many ways to do it but we're not saying ours is the only way or the best way."

Other motivations include simply starting a conversation about the issue and potentially learning from other developers who may be able to improve Audit AI.

"We certainly don't believe in sort of proprietary debiasing because that would sort of defeat the purpose," Polli says.

"The industry just needs to be more comfortable in actually realizing that if you're not checking your machine learning algorithms and you're saying, 'I don't know whether they cause bias,' I just don't think that that should be acceptable," she says. "Because it's like the ostrich in the sand approach."
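The classic statistic in hiring audits of this kind is the adverse impact ratio, the basis of the EEOC's "four-fifths rule." The generic Python sketch below illustrates that check; it is not the Audit AI library's own API, and the file and column names are assumptions.

```python
# Generic four-fifths (80 percent) rule check on hypothetical screening results.
import pandas as pd

results = pd.read_csv("screening_results.csv")
# Assumed columns:
#   group  - demographic group (e.g. a gender or ethnicity bucket)
#   passed - 1 if the candidate passed the algorithmic screen, else 0

pass_rates = results.groupby("group")["passed"].mean()
impact_ratio = pass_rates / pass_rates.max()  # each group vs. the top group

print(impact_ratio.round(2))
flagged = impact_ratio[impact_ratio < 0.8]
if not flagged.empty:
    # Under the four-fifths guideline, these groups' pass rates fall below 80%
    # of the highest group's rate, the usual trigger for a closer look.
    print("Potential adverse impact against:", ", ".join(flagged.index))
```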
Related: The Scariest Thing About AI Is the Competitive Disadvantage of Being Slow to Adapt

Rediet Abebe, co-founder of Black in AI and Mechanism Design for Social Good
Solution: Promoting diverse AI programmers and researchers

Use of facial recognition has grown dramatically in recent years — whether it's for unlocking your phone, expediting identification at the airport or scanning faces in a crowd to find potential criminals. But, it's also prone to bias.

MIT Media Lab researcher Joy Buolamwini and Timnit Gebru, who received her PhD from the Stanford Artificial Intelligence Laboratory, found that facial recognition tools from IBM, Microsoft and Face++ accurately identified the gender of white men almost 100 percent of the time, but failed to identify darker skinned women in 20 percent to 34 percent of cases. That could be because the training sets themselves were biased: The two also found that the images used to train one of the facial recognition tools were 77 percent male and more than 83 percent white.

One reason machine learning algorithms end up being biased is that they reflect the biases — whether conscious or unconscious — of the developers who built them. The tech industry as a whole is predominantly white and male, and one study by TechEmergence found women make up only 18 percent of C-level roles at AI and machine learning companies.

Some in the industry are trying to change that.

In March 2017, a small group of computer science researchers started a community called Black in AI because of an "alarming absence of black researchers," says co-founder Rediet Abebe, a PhD candidate in computer science at Cornell University. (Gebru is also a co-founder.)

"In the conferences that I normally attend there's often no black people. I'd be the only black person," Abebe says. "We realized that this was potentially a problem, especially since AI technologies are impacting our day-to-day lives and they're involved in decision-making and a lot of different domains," including criminal justice, hiring, housing applications and even what ads you see online.

"All these things are now being increasingly impacted by AI technologies, and when you have a group of people that maybe have similar backgrounds or correlated experiences, that might impact the kinds of problems that you might work on and the kind of products that you put out there," Abebe says. "We felt that the lack of black people in AI was potentially detrimental to how AI technologies might impact black people's lives."

Abebe feels particularly passionate about including more African women in AI; growing up in Ethiopia, a career in the sciences didn't seem like a possibility, unless she went into medicine. Her own research focuses on how certain communities are underserved or understudied when it comes to studying societal issues — for example, there is a lack of accurate data on HIV/AIDS deaths in developing countries — and how AI can be used to address those discrepancies. Abebe is also the co-founder and co-organizer of Mechanism Design for Social Good, an interdisciplinary initiative that shares research on AI's use in confronting similar societal challenges through workshops and meetings.

Initially, Abebe thought Black in AI would be able to rent a van to fit all the people in the group, but Black in AI's Facebook group and email list has swollen to more than 800 people, from all over the world.
While the majority of members are students or researchers, the group also includes entrepreneurs and engineers.

Black in AI's biggest initiative to date was a workshop at the Conference on Neural Information Processing Systems (NIPS) in December 2017 that garnered about 200 attendees. Thanks to partners such as Facebook, Google and ElementAI, the group was able to give out over $150,000 in travel grants to attendees.

Abebe says a highlight of the workshop was a keynote talk by Haben Girma, the first deaf/blind graduate from Harvard Law School, which got Abebe thinking about other types of diversity and intersectionality.

Black in AI is currently planning its second NIPS workshop.

As part of the more informal discussions happening in the group's forums and Facebook group, members have applied and been accepted to Cornell's graduate programs, research collaborations have started and industry allies have stepped forward to ask how they can help. Black in AI hopes to set up a mentoring program for members.

Related: Why Are Some Bots Racist? Look at the Humans Who Taught Them.

Tess Posner, CEO of AI4ALL
Solution: Introducing AI to diverse high schoolers

The nonprofit AI4ALL is targeting the next generation of AI whiz kids. Through summer programs at prestigious universities, AI4ALL exposes girls, low-income students, racial minorities and those from diverse geographic backgrounds to the possibilities of AI.

"It's becoming ubiquitous and invisible," says Tess Posner, who joined AI4ALL as founding CEO in 2017. "Yet, right now it's being developed by a homogenous group of technologists mostly. This is leading to negative impacts like race and gender bias getting incorporated into AI and machine learning systems. The lack of diversity is really a root cause for this."

She adds, "The other piece of it is we believe that this technology has such exciting potential to be addressed to solving some key issues or key problems facing the world today, for example in health care or in environmental issues, in education. And it has incredibly positive potential for good."

Started as a pilot at Stanford University in 2015 as a summer camp for girls, AI4ALL now offers programs at six universities around the country: University of California Berkeley, Boston University, Carnegie Mellon University, Princeton University, Simon Fraser University and Stanford.

Participants receive a mix of technical training, hands-on learning, demos of real-world applications (such as a self-driving car), mentorship and exposure to experts in the field. This year, guest speakers included representatives from big tech firms including Tesla, Google and Microsoft, as well as startups including H2O.ai, Mobileye and Argo AI.

The universities provide three to five "AI for good" projects for students to work on during the program. Recent examples include developing algorithms to identify fake news, predict the infection path of the flu and map poverty in Uganda.

For many participants, the AI4ALL summer program is only the beginning.

"We talk about wanting to create future leaders in AI, not just future creators, that can really shape what the future of this technology can bring," Posner says.

AI4ALL recently piloted an AI fellowship program for summer program graduates to receive funding and mentorship to work on their own projects.
One student's project involved tracking wildfires on the West Coast, while another looked at how to optimize ambulance dispatches based on the severity of the call after her grandmother died because an ambulance didn't reach her in time.

Other graduates have gone on to create their own ventures after finishing the program, and AI4ALL provides "seed grants" to help them get started. Often, these ventures involve exposing other kids like themselves to AI. For example, three alumni started a workshop series called creAIte to teach middle school girls about AI and computer science using neural art, while another runs an after school workshop called Girls Explore Tech.

Another graduate co-authored a paper on using AI to improve surgeons' technique that won an award at NIPS's Machine Learning for Health workshop in 2017.

"We have a lot of industry partners who have seen our students' projects and they go, 'Wow. I can't believe how amazing and rigorous and advanced this project is.' And it kind of changes people's minds about what talent looks like and who the face of AI really is," Posner says.

Last month, AI4ALL announced it will be expanding its reach in a big way: The organization received a $1 million grant from Google to create a free digital version of its curriculum, set to launch in early 2019.

Related: Artificial Intelligence May Reflect the Unfair World We Live in

Chad Steelberg, co-founder and CEO of Veritone
Solution: Building the next generation of AI

Serial entrepreneur Chad Steelberg first got involved in AI during his high school years in the 1980s, when he worked on algorithms to predict the three-dimensional structures of proteins. At the time, he felt AI's capabilities had reached a plateau, and he ended up starting multiple companies in different arenas, one of which he sold to Google in 2006.

A few years later, Steelberg heard from some friends at Google that AI was about to take a huge leap forward — algorithms that could actually understand and make decisions, rather than simply compute data and spit back a result. Steelberg saw the potential, and he invested $10 million of his own money to found Veritone.

Veritone's aiWARE is an operating system for AI. Instead of communicating between the software and hardware in a computer, like a traditional operating system, it takes users' queries — such as "transcribe this audio clip" — and finds the best algorithm available to process that query, whether that's Google Cloud Speech-to-Text, Nuance or some other transcription engine. As of now, aiWARE can scan more than 200 models in 16 categories, from translation to facial recognition.

Algorithms work best when they have a sufficiently narrow training set. For example, if you try to train one algorithm to play go, chess and checkers, it will fail at all three, Steelberg says. Veritone tells the companies it works with to create algorithms for very narrow use cases, such as images of faces in profile. AiWARE will find the right algorithm for the specific query, and can even trigger multiple algorithms for the same query. Steelberg says when an audio clip uses multiple languages, the translations aiWARE returns are 15 percent to 20 percent more accurate than the best single engine on the platform.
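Mechanically, this kind of routing layer amounts to learning, from past benchmark results, which engine tends to win on which kind of input. The Python sketch below is a rough illustration of that idea; the engine names, features and CSV file are hypothetical, not Veritone's actual aiWARE interfaces.

```python
# Illustrative engine router trained on hypothetical benchmark history.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

history = pd.read_csv("engine_benchmarks.csv")
# Assumed columns:
#   detected_language, audio_quality, speaker_count - features of past clips
#   best_engine - which transcription engine scored highest against ground
#                 truth for that clip (e.g. "engine_a", "engine_b", "engine_c")

X = pd.get_dummies(history[["detected_language", "audio_quality", "speaker_count"]])
router = RandomForestClassifier(n_estimators=200, random_state=0)
router.fit(X, history["best_engine"])

def pick_engine(clip: pd.DataFrame) -> str:
    """Route a new clip to the engine predicted to handle it best."""
    features = pd.get_dummies(clip).reindex(columns=X.columns, fill_value=0)
    return router.predict(features)[0]

# Example: a Spanish-language clip with two speakers and middling audio quality.
new_clip = pd.DataFrame([{"detected_language": "es", "audio_quality": 0.6, "speaker_count": 2}])
print(pick_engine(new_clip))
```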
Algorithms designed for parsing text and speech, such as transcription and translation, are another area prone to bias. One study found algorithms categorized written African American vernacular English as "not English" at high rates, while a Washington Post investigation found voice assistants such as Amazon's Alexa have a hard time deciphering accented English.

Though it wasn't built to eliminate bias, aiWARE ends up doing exactly that, Steelberg says. Just like the human brain is capable of taking all of its learned information and picking the best response to each situation, aiWARE learns which model (or models) is most appropriate to use for each query.

"We use our aiWARE to arbitrate and evaluate each of those models as to what they believe the right answer is, and then aiWARE is learning to choose which set of models to trust at every single point along the curve," Steelberg says.

It's not an issue if an algorithm is biased. "What's problematic is when you try to solve the problem with one big, monolithic model," Steelberg says. AiWARE is learning to recognize which models are biased and how, and work around those biases.

Another factor that results in biased AI is that many algorithms will ignore small subsets of a training set. If in a data set of 1 million entries, there are three that are different, you can still achieve a high degree of accuracy overall while performing horribly on certain queries. This is often the reason facial recognition software fails to recognize people of color: The training set contained mostly images of white faces.

Veritone tells companies to break down training sets into micro models, and then aiWARE can interpolate to create similar examples.

"You're essentially inflating that population, and you can train models now on an inflated population that learn that process," Steelberg says.

Using small training sets, aiWARE can build models for facial recognition with accuracy in the high 90th percentile for whatever particular subcategory a client is interested in (e.g., all the employees at your company), he says.

Steelberg says he believes an intelligent AI like aiWARE has a much better chance of eliminating bias than a human auditor. For one, humans will likely have a hard time identifying flawed training sets. They also might bring their own biases to the process. And for larger AI models, which might encompass "tens of millions of petabytes of data," a human auditor is just impractical, Steelberg says. "The sheer size makes it inconceivable."
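The "inflating that population" step Steelberg describes is, at its simplest, a resampling or augmentation pass that keeps rare subgroups from being ignored during training. The sketch below shows the plainest version of that idea, straight oversampling of small subgroups; it is an illustration under assumptions, not Veritone's interpolation method, and the file and column names are made up.

```python
# Naive subgroup oversampling on a hypothetical labeled-image index.
import pandas as pd

train = pd.read_csv("face_training_index.csv")  # one row per labeled image
# Assumed column: subgroup - e.g. a skin-tone x gender bucket for each image

counts = train["subgroup"].value_counts()
target = counts.max()  # bring every bucket up to the size of the largest one

balanced = pd.concat(
    [bucket.sample(n=target, replace=True, random_state=0)  # upsample small buckets
     for _, bucket in train.groupby("subgroup")],
    ignore_index=True,
)

print(counts)
print(balanced["subgroup"].value_counts())
# In practice the duplicated rows would be replaced with augmented or interpolated
# variants (crops, lighting shifts, synthesized examples) rather than exact copies,
# so the model sees genuinely new instances of the underrepresented subgroups.
```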