Gender and race pose a challenge for facial recognition. Studies have shown the technology has a harder time identifying women and people with darker skin, and civil rights advocates warn that these shortcomings could adversely affect minorities. Several airports and airlines have rolled out the biometric tech across the US, offering a faster way to board flights. The technology scans a traveler’s face and matches it with a passport photo provided to the airlines by the State Department, and it is slated for use in the top 20 US airports by 2021. CBP says it has a match rate in the high 90th percentile, while a study from the DHS’ Office of Inspector General found a match rate closer to 85%. Customs and Border Protection says the system is getting better. A spokesman for the agency noted that the OIG study drew from a 2017 demo that looked at the potential for the Traveler Verification Service. “In the current deployment of TVS,” the spokesman said, “CBP has been able to successfully photograph and match over 98% of travelers who have photos in U.S. Government systems.” In addition, CBP is working with the National Institute of Standards and Technology to analyze the performance of face-matching tech, “including impacts due to traveler demographics and image quality,” the spokesman said. A lack of diverse data is what led to racial bias in facial recognition to begin with. Experts have suggested that photo databases for facial recognition could be using more images of white people than people of color, which skews how effective the technology is for minorities.
Jake Laperruque, a senior counsel at the Constitution Project, is concerned that the agency is turning a blind eye to the potential for racial bias at airports. “The comments reflect a troubling lack of concern about the well-documented problem of facial recognition systems having higher error rates for people of color,” Laperruque said in an email. “CBP can’t simply ignore a serious issue and take a ‘see no evil’ approach — if they’re not willing to confront serious civil rights problems and deal with them, they shouldn’t be trusted to operate a program like this.” Originally published May 6. Updated May 8: Added comment from a CBP spokesman. (Photo: A woman boarding an SAS flight to Copenhagen goes through the VeriScan facial recognition verification system at Dulles International Airport in Virginia. Jim Watson/AFP/Getty Images) Facial recognition technology is prone to errors, and when it comes to racial bias at airports, there’s a good chance it’s not learning from its mistakes. Debra Danisek, a privacy officer with US Customs and Border Protection, told an audience Friday at the International Association of Privacy Professionals Summit about what data its facial recognition tech collects — but more importantly, what data it doesn’t collect. “In terms of ‘Does this technology have a different impact on different racial groups?’ we don’t collect that sort of data,” Danisek said. “In terms of keeping metrics on which groups are more affected, we wouldn’t have those metrics to begin with.” In other words, while the CBP does collect data that’s available on people’s passports — age, gender and citizenship — to help improve its facial recognition algorithm, it doesn’t gather data on race and ethnicity, even when a passenger is misidentified. So the CBP doesn’t know when there’s a mismatch based on a person’s skin color. It’s relying on reports from the Department of Homeland Security’s Redress program to identify when that happens.
“If we notice we have a pattern of folks making complaints about this process, then we would investigate,” Danisek said.
The farm of the future is in the cloud. Your next salad might be grown in the cloud and served with a side of artificial intelligence. Cloud computing, a technology that relies on clustered servers positioned across the globe, supports everything from drones to machine learning and the smart home. Using cloud tech, farms are about to become a lot smarter as well. Microsoft’s FarmBeats program uses the company’s Azure cloud to connect agricultural devices and generate data intended to help farms operate more efficiently. Sensors embedded in the soil use the cloud to communicate with drones that circle farms to direct irrigation patterns and herbicide distribution and to optimize the harvesting of crops. CNET visited Microsoft’s Cloud Collaboration Center in Redmond, Washington, to learn more about how the cloud, AI and the internet of things (IoT) are transforming business. “We use machine learning and image recognition to understand how our crops are growing,” said Jason Zander, executive vice president of Microsoft Azure. “Today a lot of irrigation systems … just kind of throw water everywhere. Being able to leverage drones and some of these sensors means we save water and get better production out of [farms].” The cloud is evolving rapidly, said Zander. “A decade ago the cloud helped mobile phones become ubiquitous. Today, the cloud becomes really powerful when it helps other emerging technologies like AI and IoT. It’s exciting because we can help entire [business] sectors quickly become more efficient.”
“Dhubri Police rn.” pic.twitter.com/4JTm7QsdpW — meghnad (@Memeghnad) June 5, 2019 (Representational image: Flickr) Assam Police put their humour to good use by tweeting a lost-and-found photo for 590 kg of marijuana. On Tuesday, Assam Police tweeted an image of bundles of drugs with the caption, “Anyone lost a huge (590 kgs) amount of Cannabis/Ganja and a truck in and around Chagolia Checkpoint last night? Don’t panic, we found it. Please get in touch with @Dhubri_Police. They will help you out, for sure 😉 Great job Team Dhubri.” The post poked fun at whoever lost the stash and also congratulated the Dhubri police for finding and seizing the marijuana at the Chagolia checkpoint. Assam Police have had a busy 2019 so far, with multiple drug busts in the state; in the latest, in March, the criminal investigation department discovered a godown in Guwahati storing huge bundles of psychotropic drugs. In Itanagar, the CID cracked down on a heroin racket. The humorous and cheeky tweet had social media in stitches, and the post received over 16k likes and 6k retweets.
A week after the United Nations report on human rights violations in Kashmir was published, the Ministry of Home Affairs (MHA) on Tuesday, July 16, released data on the number of militant infiltrations in J&K. Minister of State for Home Affairs G. Kishan Reddy said that the number of armed infiltrations in the valley increased from 119 in 2016 to 143 in 2018. The number of militants killed also increased, from 35 in 2016 to 59 in 2017, and the data claims that 32 militants were killed in military operations in 2018, reported ANI. Reddy also revealed that the number of security officials killed in military conflict decreased from 15 personnel in 2016 to five in 2018. The Ministry said that the armed conflict in the Kashmir valley is mainly due to cross-border sponsored terror activities, and only four militants have been arrested since 2016. The MHA data showed that militant infiltration in the valley reduced by 43 per cent in the first half of 2019. On Tuesday, July 16, Manish Tiwari submitted a Zero Hour notice in the Lok Sabha to discuss the UN report on human rights violations in Kashmir. India has rejected the claims made by the report, which said that civilian casualties due to the armed conflict in Kashmir in 2018 were the highest recorded in a decade. Ministry of External Affairs (MEA) spokesperson Raveesh Kumar slammed the report and called it “false and motivated.” He condemned the UN body for its ‘failure’ to recognise the independent judiciary, human rights institutions and other mechanisms in the state that “safeguard, protect and promote constitutionally guaranteed fundamental rights to all citizens of India.” UN report on human rights violations: Stating that 160 civilians were killed in 2018, the report cited data collected by the Jammu and Kashmir Coalition of Civil Society (JKCCS). It stated that around 586 people, including 267 militants and 159 Indian security personnel, were killed due to the violent conflict in the region.
The number is the highest since 2008. The UN also accused official data published by the Indian Union Ministry of Home Affairs of deliberately stating “lower casualty figures, citing 37 civilians, 238 terrorists and 86 security personnel killed in the 11 months up to 2 December 2018.” Criticism: Critics claimed that the report did not appropriately substantiate its arguments that security officials were violating human rights in Kashmir, and accused it of being soft on Pakistan despite that country’s evident role in cross-border terrorism. They also highlighted that the UN report said NGOs, human rights defenders and journalists in India are able to operate and document the ongoing human rights violations, whereas in Pakistan-occupied Kashmir and the Gilgit-Baltistan region “restrictions on the freedoms of expression, opinion, peaceful assembly and association” have obstructed the ability to monitor human rights abuses.
(Photo: Bus torched over road accident in Chittagong. Shourav Das/Prothom Alo) Agitated locals set fire to a bus in the Solosahar area of Panchlaish in the port city of Chittagong on Tuesday after it hit a motorcyclist, leaving him seriously injured, reports UNB. The injured motorcyclist, Selim, 55, was admitted to Chittagong Medical College and Hospital, said assistant sub-inspector of the CMCH police outpost Abdul Hamid. The accident occurred around 8:00 am, he said. A Fire Service unit later doused the flames, and no further casualties were reported, said deputy assistant director of Chittagong Fire Service Md Jasim Uddin.
The Election Commission is uncertain whether it can amend the Representation of the People Order (RPO) ahead of the 11th parliamentary elections, reports UNB. “After receiving the review (report) over proposed amendments from the (EC’s) subcommittee, it will be clear if the RPO amendments can be made effective by passing in this (10th) parliament,” said EC secretary Helaluddin Ahmed while briefing reporters after a meeting of the commission on Thursday. He made the remark in response to a question on whether the commission will get time to have the RPO amendment passed in the 10th parliament. When the draft amendment is finalised, the commission will be able to say whether it could be passed in the next session of parliament, he added. The EC is yet to finalise the draft amendment to the RPO, though it was supposed to do so by December last year as per its roadmap for the next general election. The EC’s ‘subcommittee to reform electoral laws and rules’ placed a set of proposals regarding the RPO amendment at the EC’s meeting on 9 April. The commission discussed the proposals at its meeting on Thursday and sent them back to the subcommittee for further review. The EC secretary said the subcommittee brought 35 amendment proposals for the RPO and the commissioners examined them in the meeting. Helaluddin said that though no timeframe was fixed for the subcommittee to place the review report, it was requested to do so as early as possible.
(AP Photo/Matt Sayles: In this Feb. 11, 2007, file photo, adult film actress Stormy Daniels arrives for the 49th Annual Grammy Awards in Los Angeles.) Stormy Daniels, whose real name is Stephanie Clifford, is suing President Donald Trump and wants a California judge to invalidate a nondisclosure agreement she signed days before the 2016 presidential election. An attorney for Daniels filed a motion Wednesday seeking to question President Trump and his attorney under oath about a pre-election payment to the porn actress aimed at keeping her quiet about an alleged tryst with Trump. If successful, it would be the first deposition of a sitting president since Bill Clinton in 1998 had to answer questions about his conduct with women. Attorney Michael Avenatti is seeking sworn testimony from Trump and Trump’s personal lawyer, Michael Cohen, about a $130,000 payment made to Daniels days before the 2016 presidential election as part of a nondisclosure agreement she is seeking to invalidate. Avenatti’s documents were filed in U.S. District Court in California. Avenatti is part of a growing list of lawyers looking to question Trump. Attorneys for a former contestant on one of Trump’s “Apprentice” TV shows have said they want to depose the president as part of a defamation suit. And the president’s legal team continues to negotiate with special counsel Robert Mueller over the scope and terms of an interview with the president. Avenatti wants to question Trump and Cohen for “no more than two hours.” In the filing, he says the depositions are needed to establish whether Trump knew about the payment, which he refers to as a “hush agreement,” and whether he consented to it. “We’re looking for sworn answers from the president and Mr.
Cohen about what they knew, when they knew it and what they did about it,” Avenatti told The Associated Press. While he noted that “in every case you always have to be open to settlement,” Avenatti said that “at this point we don’t see how this case would possibly be settled.” In a statement to CBS, Cohen’s attorney David Schwartz called the filing a “reckless use of the legal system in order to continue to inflate Michael Avenatti’s deflated ego and keep himself relevant.” The White House, which has said Trump denies the relationship, did not immediately respond to requests for comment. A former businessman, Trump is no stranger to high-stakes litigation, having sat for depositions in contract and defamation lawsuits over the years. Those interviews show his deep experience in giving statements to lawyers, but also a witness who could be voluble, boastful and, at times, combative. Georgetown University law professor Naomi Mezey said a deposition presented risks because it is a way to get the president in a vulnerable position. “And President Trump is a particularly vulnerable president,” Mezey said. Daniels, whose legal name is Stephanie Clifford, detailed her alleged 2006 tryst with Trump in a widely watched interview with CBS’ “60 Minutes” that aired Sunday. She said she’d had sex with him once, shortly after Trump’s wife, Melania, gave birth to the president’s youngest son. She also said that a man approached her in a Las Vegas parking lot in 2011, when she was with her infant daughter, and threatened her with physical harm if she went public with her story. The interview prompted a new flurry of legal action, with a lawyer for Cohen demanding that Daniels publicly apologize to his client for suggesting he was involved in her intimidation.
Daniels responded by filing a revised federal lawsuit accusing Cohen of defamation. Cohen has said he paid the $130,000 out of his own pocket and that neither the Trump Organization nor the Trump campaign was a party to the transaction. Avenatti has argued that the “hush agreement” Daniels signed in October 2016 is invalid because it was not signed by Trump. A hearing before Judge S. James Otero in the federal court’s Central District in Los Angeles is set for April 30. As precedent, the motion notes that Clinton was deposed while in office in 1998 during Paula Jones’ sexual harassment suit. That came after the Supreme Court ruled that a sitting president was not immune from civil litigation over something that happened before taking office and was unrelated to the office. Jones’ case was dismissed by a judge and then appealed. The appeal was still pending when Clinton agreed to pay $850,000 to Jones to settle the case. He did not admit wrongdoing. Later in 1998, Clinton also gave grand jury testimony about his relationship with White House intern Monica Lewinsky.
More information: Dean R. Lomax et al., “An 8.5 m long ammonite drag mark from the Upper Jurassic Solnhofen Lithographic Limestones, Germany,” PLOS ONE (2017). DOI: 10.1371/journal.pone.0175426. Journal information: PLOS ONE. Abstract: Trackways and tracemakers preserved together in the fossil record are rare. However, the co-occurrence of a drag mark together with the dead animal that produced it is exceptional. Here, we describe an 8.5 m long ammonite drag mark complete with the preserved ammonite shell (Subplanites rueppellianus) at its end. Previously recorded examples preserve ammonites with drag marks of less than 1 m. The specimen was recovered from a quarry near Solnhofen, southern Germany. The drag mark consists of continuous parallel ridges and furrows produced by the ribs of the ammonite shell as it drifted just above the sediment surface, and does not reflect behaviour of the living animal. Citation: “‘Death drag’ of ancient ammonite fossil digitized and put online” (2017, May 11), retrieved 18 August 2019 from https://phys.org/news/2017-05-death-ancient-ammonite-fossil-digitized.html. A death drag is a mark left behind by a creature that recently died and was moved or dragged by another force — in this case, an ammonite, a mollusk with a spiral shell that lived in the sea approximately 150 million years ago. It was dragged along the sea floor by the current after it died and left behind a very shallow trench. Finding a death drag from a creature that lived millions of years ago is very rare, of course, because it requires a very specific set of circumstances for preservation and discovery.
In this case, it was a team of paleontologists digging at a quarry back in the 1990s at a site near the town of Solnhofen in Germany, where many other ancient fossils have been found. The ammonite and its death drag were preserved and were eventually put on display in a museum in Barcelona. The death drag is approximately 8.5 meters long and grows more defined the closer it gets to the ammonite fossil. Prior research has suggested that the sea creature (which was missing its lower jaw, offering proof that it was dead prior to being dragged) was quite buoyant when it began scraping the bottom, due to decomposition gases inside its shell — thus, it was just barely touching the bottom and able to leave only grooves at the edges. As time passed, gas seeped from the shell and the creature was dragged more heavily through the sediment, leaving a more defined trench. Prior research also suggested the mark was made at a water depth of 20 to 60 meters and was likely created by a gentle underwater current. In this new effort, the researchers used a technique called photogrammetry to create digitized imagery of the death drag and the fossil: hundreds of images were taken from multiple angles and stitched together to create a 3-D model, which is available for download or online in video format. (Phys.org) — A team of workers from institutions in the U.K., Germany and Spain has put online a digitized 3-D model of the “death drag” of an ammonite fossil — one of the longest ever found for such an ancient creature. They have also written a paper describing both the death drag and the fossil and have posted it on the open access site PLOS ONE. (Image: The ammonite Subplanites rueppellianus, the producer of the drag mark (MCFO 0492). Credit: PLOS ONE (2017). DOI: 10.1371/journal.pone.0175426) © 2017 Phys.org
Consuming foods such as bananas, potatoes, grains and legumes that are rich in resistant starch may help control blood sugar, enhance satiety and improve gut health, a study has found. Resistant starch is a form of starch that is not digested in the small intestine and is therefore considered a type of dietary fibre. “We know that adequate fibre intake – at least 30 grams per day – is important for achieving a healthy, balanced diet, which reduces the risk of developing a range of chronic diseases,” said Stacey Lockyer, nutrition scientist at the British Nutrition Foundation, a Britain-based charity. Apart from occurring naturally in foods, resistant starch is also produced or modified commercially and incorporated into food products. Unlike typical starch, resistant starch acts like a type of fibre in the body: it does not get digested in the small intestine but is fermented in the large intestine. This fermentation increases the production of short-chain fatty acids in the gut, which act as an energy source for colonic cells, improving gut health and increasing satiety. According to the researchers, there is consistent evidence that consumption of resistant starch can aid blood sugar control. It has also been suggested that resistant starch can support gut health and enhance satiety via increased production of short-chain fatty acids. “Whilst findings support positive effects on some markers, further research is needed in most areas to establish whether consuming resistant starch can confer significant benefits that are relevant to the general population. However, this is definitely an exciting area of nutritional research for the future,” Lockyer said. The study was published in the journal Nutrition Bulletin.
Practising yoga during pregnancy helps women stay calm and eases most physical problems, said Maneka Sanjay Gandhi, Minister for Women and Child Development, who attended a yoga session with pregnant women at the National Institute of Public Cooperation and Child Development (NIPCCD), New Delhi. The Ministry of Women and Child Development celebrated the 4th International Day of Yoga through various activities. During the session, Gandhi interacted with the expecting mothers, who shared their experiences of practising prenatal yoga. The Minister performed asanas along with the pregnant women under the guidance of a yoga trainer to encourage the practice of prenatal yoga, and emphasised the importance of yoga for pregnant women. However, prenatal yoga must be practised only under qualified instructors, the Minister stressed. She added that making yoga an integral part of life has holistic benefits, and that it can especially help pregnant mothers by giving them the ability to stay calm and easing most physical problems during the nine months. After participating in the prenatal yoga session, the Minister also said that regularly practising prenatal yoga can help prepare a woman’s body for normal delivery. She shared her own daily yoga routine and urged people to perform yoga to stay healthy and happy, adding that Pranayama has been found to have exceptional benefits during pregnancy. Besides Gandhi, other officials of the Ministry, led by Secretary Rakesh Srivastava, also enthusiastically participated in the yoga session.
September 5, 2018. Whether it’s a navigation app such as Waze, a music recommendation service such as Pandora or a digital assistant such as Siri, odds are you’ve used artificial intelligence in your everyday life. “Today 85 percent of Americans use AI every day,” says Tess Posner, CEO of AI4ALL. AI has also been touted as the new must-have for business, for everything from customer service to marketing to IT. However, for all its usefulness, AI also has a dark side: in many cases, the algorithms are biased. Related: What Is AI, Anyway? Know Your Stuff With This Go-To Guide. Some examples of bias are blatant, such as Google’s facial recognition tool tagging black faces as gorillas or an algorithm used by law enforcement to predict recidivism disproportionately flagging people of color. Others are more subtle. When Beauty.AI held an online contest judged by an algorithm, the vast majority of “winners” were light-skinned. Search Google for images of “unprofessional hair” and the results will mostly be pictures of black women (even searching for “man” or “woman” brings back images of mostly white individuals). While more light has been shined on the problem recently, some feel it’s not addressed enough in the broader tech community, let alone in research at universities or the government and law enforcement agencies that implement AI. “Fundamentally, bias, if not addressed, becomes the Achilles’ heel that eventually kills artificial intelligence,” says Chad Steelberg, CEO of Veritone. “You can’t have machines where their perception and recommendation of the world is skewed in a way that makes its decision process a non-sequitur from action.
From just a basic economic perspective and a belief that you want AI to be a powerful component of the future, you have to solve this problem.” As artificial intelligence becomes ever more pervasive in our everyday lives, there is now a small but growing community of entrepreneurs, data scientists and researchers working to tackle the issue of bias in AI. I spoke to a few of them to learn more about the ongoing challenges and possible solutions. Cathy O’Neil, founder of O’Neil Risk Consulting & Algorithmic Auditing. Solution: algorithm auditing. Back in the early 2010s, Cathy O’Neil was working as a data scientist in advertising technology, building algorithms that determined what ads users saw as they surfed the web. The inputs for the algorithms included innocuous-seeming information like what search terms someone used or what kind of computer they owned. However, O’Neil came to realize that she was actually creating demographic profiles of users. Although gender and race were not explicit inputs, O’Neil’s algorithms were discriminating against users of certain backgrounds based on the other cues. As O’Neil began talking to colleagues in other industries, she found this to be fairly standard practice. These biased algorithms weren’t just deciding what ads a user saw, but arguably more consequential matters, such as who got hired or whether someone would be approved for a credit card. (These observations have since been studied and confirmed by O’Neil and others.) What’s more, in some industries — for example, housing — if a human were to make decisions based on the same set of criteria, it would likely be illegal under anti-discrimination laws. But because an algorithm was deciding, and gender and race were not explicitly factors, it was assumed the decision was impartial. “I had left the finance [world] because I wanted to do better than take advantage of a system just because I could,” O’Neil says.
“I’d entered data science thinking that it was less like that. I realized it was just taking advantage in a similar way to the way finance had been doing it. Yet, people were still thinking that everything was great back in 2012. That they were making the world a better place.” O’Neil walked away from her adtech job. She wrote a book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, about the perils of letting algorithms run the world, and started consulting. Eventually, she settled on a niche: auditing algorithms. “I have to admit that it wasn’t until maybe 2014 or 2015 that I realized this is also a business opportunity,” O’Neil says. Right before the election in 2016, that realization led her to found O’Neil Risk Consulting & Algorithmic Auditing (ORCAA). “I started it because I realized that even if people wanted to stop unfair or discriminatory practices, they wouldn’t actually know how to do it,” O’Neil says. “I didn’t actually know. I didn’t have good advice to give them.” But she wanted to figure it out. So, what does it mean to audit an algorithm? “The most high-level answer to that is it means to broaden our definition of what it means for an algorithm to work,” O’Neil says. Often, companies will say an algorithm is working if it’s accurate, effective or increasing profits, but for O’Neil, that shouldn’t be enough. “So, when I say I want to audit your algorithm, it means I want to delve into what it is doing to all the stakeholders in the system in which you work, in the context in which you work,” O’Neil says. “And the stakeholders aren’t just the company building it, aren’t just the company deploying it. It includes the target for the algorithm, so the people that are being assessed. It might even include their children. I want to think bigger. I want to think more about externalities, unforeseen consequences.
I want to think more about the future.” For example, Facebook’s News Feed algorithm is very good at encouraging engagement and keeping users on its site. However, there’s also evidence it reinforces users’ beliefs, rather than promoting dialog, and has contributed to ethnic cleansing. While that may not be evidence of bias, it’s certainly not a net positive. Right now, ORCAA’s clients are companies that ask for their algorithms to be audited because they want a third party — such as an investor, client or the general public — to trust them. For example, O’Neil has audited an internal Siemens project and New York-based Rentlogic’s landlord rating algorithm. These types of clients are generally already on the right track and simply want a third-party stamp of approval. However, O’Neil’s dream clients would be those who don’t necessarily want her there. “I’m going to be working with them because some amount of pressure, whether it’s regulatory or litigation or some public relations pressure, kind of forces their hand and they invite me in,” O’Neil says. Most tech companies pursue profit above all else, O’Neil says, and won’t seriously address the issue of bias unless there are consequences. She feels that existing anti-discrimination protections need to be enforced in the age of AI. “The regulators don’t know how to do this stuff,” O’Neil says. “I would like to give them tools. But I have to build them first. … We basically built a bunch of algorithms assuming they work perfectly, and now it’s time to start building tools to test whether they’re working at all.” Related: Artificial Intelligence Is Likely to Make a Career in Finance, Medicine or Law a Lot Less Lucrative. Frida Polli, co-founder and CEO of Pymetrics. Solution: open source AI auditing. Many thought artificial intelligence would solve the problem of bias in hiring, by making sure human evaluators weren’t prejudging candidates based on the name they saw on a resume or the applicant’s appearance.
However, some argue hiring algorithms end up perpetuating the biases of their creators.

Pymetrics is one company that develops algorithms to help clients fill job openings based on the traits of high-performing existing employees. It believes it’s found a solution to the bias problem in an in-house auditing tool, and now it’s sharing the tool with the world.

Co-founder and CEO Frida Polli stresses that fighting bias was actually a secondary goal for Pymetrics.

“We’re not a diversity-first platform,” Polli says. “We are a predictive analytics platform.”

However, after seeing that many of her clients’ employee examples used to train Pymetrics’s algorithms were not diverse, combating bias became important.

“Either you do that or you’re actually perpetuating bias,” Polli says. “So, we decided we certainly were not going to perpetuate bias.”

Early on, the company developed Audit AI to make sure its algorithms were as neutral as possible when it came to factors including gender and race. If a company looking to fill a sales role had a sales team that was predominantly white and male, an unaudited algorithm might pick a candidate with those same traits. Polli was quick to point out that Audit AI would also recommend adjustments if an algorithm was weighted in favor of women or people of color.

Some critics say if you tweak a hiring algorithm to remove bias you’re lowering the bar, but Polli disagrees.

“It’s the age-old criticism that’s like, ‘oh well, you’re not getting the best candidate,’” Polli says. “‘You’re just getting the most diverse candidate, because now you’ve lowered how well your algorithm is working.’ What’s really awesome is that we don’t see that. We have not seen this tradeoff at all.”

In May, Pymetrics published the code for its internal Audit AI auditing tool on GitHub.
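One widely used test for the kind of adverse impact an auditing tool like this looks for is the EEOC’s “four-fifths” guideline: a group is flagged if its selection rate falls below 80 percent of the best-off group’s rate. The sketch below illustrates that check with invented data and function names; it is not Audit AI’s actual API.

```python
# Minimal sketch of a four-fifths-rule adverse-impact check.
# Data and function names are hypothetical, not Audit AI's real interface.

def selection_rates(outcomes):
    """Fraction of candidates the algorithm recommended, per group."""
    return {g: sum(picked) / len(picked) for g, picked in outcomes.items()}

def four_fifths_check(outcomes):
    """True if a group's selection rate is at least 80% of the highest
    group's rate (the EEOC adverse-impact guideline); False flags bias."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top >= 0.8 for g, r in rates.items()}

# 1 = recommended by the hiring algorithm, 0 = not recommended
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 = 0.75 selected
    "group_b": [1, 0, 0, 1, 0, 0, 0, 0],  # 2/8 = 0.25 selected
}
print(four_fifths_check(outcomes))  # {'group_a': True, 'group_b': False}
```

Here group_b’s rate is 0.25/0.75 ≈ 0.33 of group_a’s, well under the 0.8 threshold, so an auditor would recommend adjusting the model.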
Polli says the first goal for making Audit AI open source is to encourage others to develop auditing techniques for their algorithms. “If they can learn something from the way that we’re doing it, that’s great. Obviously there are many ways to do it, but we’re not saying ours is the only way or the best way.”

Other motivations include simply starting a conversation about the issue and potentially learning from other developers who may be able to improve Audit AI.

“We certainly don’t believe in sort of proprietary debiasing because that would sort of defeat the purpose,” Polli says.

“The industry just needs to be more comfortable in actually realizing that if you’re not checking your machine learning algorithms and you’re saying, ‘I don’t know whether they cause bias,’ I just don’t think that that should be acceptable,” she says. “Because it’s like the ostrich in the sand approach.”

Rediet Abebe, co-founder of Black in AI and Mechanism Design for Social Good
Solution: Promoting diverse AI programmers and researchers

Use of facial recognition has grown dramatically in recent years — whether it’s for unlocking your phone, expediting identification at the airport or scanning faces in a crowd to find potential criminals. But it’s also prone to bias.

MIT Media Lab researcher Joy Buolamwini and Timnit Gebru, who received her PhD from the Stanford Artificial Intelligence Laboratory, found that facial recognition tools from IBM, Microsoft and Face++ accurately identified the gender of white men almost 100 percent of the time, but failed to identify darker-skinned women in 20 percent to 34 percent of cases.
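The per-subgroup breakdown behind findings like these is straightforward to compute once predictions are labeled by group. The sketch below uses invented records whose error rates merely mirror the ranges the researchers reported; it is not the study’s data or code.

```python
# Sketch of a per-subgroup error-rate audit. The records are hypothetical,
# chosen only to mirror the ranges reported in the text above.

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label).
    Returns each group's misclassification rate."""
    totals, errors = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        if truth != pred:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / n for g, n in totals.items()}

records = (
    [("lighter_male", "M", "M")] * 99 +    # 99 correct
    [("lighter_male", "M", "F")] * 1 +     # 1 misgendered
    [("darker_female", "F", "F")] * 70 +   # 70 correct
    [("darker_female", "F", "M")] * 30     # 30 misgendered
)
print(error_rate_by_group(records))
# {'lighter_male': 0.01, 'darker_female': 0.3}
```

An overall accuracy number for this dataset (84.5 percent) would hide the 30x gap between the two groups, which is exactly why subgroup reporting matters.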
That could be because the training sets themselves were biased: The two also found that the images used to train one of the facial recognition tools were 77 percent male and more than 83 percent white.

One reason machine learning algorithms end up being biased is that they reflect the biases — whether conscious or unconscious — of the developers who built them. The tech industry as a whole is predominantly white and male, and one study by TechEmergence found women make up only 18 percent of C-level roles at AI and machine learning companies. Some in the industry are trying to change that.

In March 2017, a small group of computer science researchers started a community called Black in AI because of an “alarming absence of black researchers,” says co-founder Rediet Abebe, a PhD candidate in computer science at Cornell University. (Gebru is also a co-founder.)

“In the conferences that I normally attend there’s often no black people. I’d be the only black person,” Abebe says. “We realized that this was potentially a problem, especially since AI technologies are impacting our day-to-day lives and they’re involved in decision-making in a lot of different domains,” including criminal justice, hiring, housing applications and even what ads you see online.

“All these things are now being increasingly impacted by AI technologies, and when you have a group of people that maybe have similar backgrounds or correlated experiences, that might impact the kinds of problems that you might work on and the kind of products that you put out there,” Abebe says. “We felt that the lack of black people in AI was potentially detrimental to how AI technologies might impact black people’s lives.”

Abebe feels particularly passionate about including more African women in AI; growing up in Ethiopia, a career in the sciences didn’t seem like a possibility unless she went into medicine.
Her own research focuses on how certain communities are underserved or understudied when it comes to studying societal issues — for example, there is a lack of accurate data on HIV/AIDS deaths in developing countries — and how AI can be used to address those discrepancies. Abebe is also the co-founder and co-organizer of Mechanism Design for Social Good, an interdisciplinary initiative that shares research on AI’s use in confronting similar societal challenges through workshops and meetings.

Initially, Abebe thought Black in AI would be able to rent a van to fit all the people in the group, but Black in AI’s Facebook group and email list have grown to more than 800 people from all over the world. While the majority of members are students or researchers, the group also includes entrepreneurs and engineers.

Black in AI’s biggest initiative to date was a workshop at the Conference on Neural Information Processing Systems (NIPS) in December 2017 that garnered about 200 attendees. Thanks to partners such as Facebook, Google and ElementAI, the group was able to give out over $150,000 in travel grants to attendees. Abebe says a highlight of the workshop was a keynote talk by Haben Girma, the first deaf/blind graduate from Harvard Law School, which got Abebe thinking about other types of diversity and intersectionality. Black in AI is currently planning its second NIPS workshop.

As part of the more informal discussions happening in the group’s forums and Facebook group, members have applied and been accepted to Cornell’s graduate programs, research collaborations have started and industry allies have stepped forward to ask how they can help. Black in AI hopes to set up a mentoring program for members.

Tess Posner, CEO of AI4ALL
Solution: Introducing AI to diverse high schoolers

The nonprofit AI4ALL is targeting the next generation of AI whiz kids.
Through summer programs at prestigious universities, AI4ALL exposes girls, low-income students, racial minorities and those from diverse geographic backgrounds to the possibilities of AI.

“It’s becoming ubiquitous and invisible,” says Tess Posner, who joined AI4ALL as founding CEO in 2017. “Yet, right now it’s being developed by a homogenous group of technologists mostly. This is leading to negative impacts like race and gender bias getting incorporated into AI and machine learning systems. The lack of diversity is really a root cause for this.”

She adds, “The other piece of it is we believe that this technology has such exciting potential to be addressed to solving some key issues or key problems facing the world today, for example in health care or in environmental issues, in education. And it has incredibly positive potential for good.”

Started as a pilot at Stanford University in 2015 as a summer camp for girls, AI4ALL now offers programs at six universities: the University of California, Berkeley; Boston University; Carnegie Mellon University; Princeton University; Simon Fraser University; and Stanford.

Participants receive a mix of technical training, hands-on learning, demos of real-world applications (such as a self-driving car), mentorship and exposure to experts in the field. This year, guest speakers included representatives from big tech firms including Tesla, Google and Microsoft, as well as startups including H2O.ai, Mobileye and Argo AI.

The universities provide three to five “AI for good” projects for students to work on during the program.
Recent examples include developing algorithms to identify fake news, predict the infection path of the flu and map poverty in Uganda.

For many participants, the AI4ALL summer program is only the beginning.

“We talk about wanting to create future leaders in AI, not just future creators, that can really shape what the future of this technology can bring,” Posner says.

AI4ALL recently piloted an AI fellowship program for summer program graduates to receive funding and mentorship to work on their own projects. One student’s project involved tracking wildfires on the West Coast, while another looked at how to optimize ambulance dispatches based on the severity of the call, after her grandmother died because an ambulance didn’t reach her in time.

Other graduates have gone on to create their own ventures after finishing the program, and AI4ALL provides “seed grants” to help them get started. Often, these ventures involve exposing other kids like themselves to AI. For example, three alumni started a workshop series called creAIte to teach middle school girls about AI and computer science using neural art, while another runs an after-school workshop called Girls Explore Tech. Another graduate co-authored a paper on using AI to improve surgeons’ technique that won an award at NIPS’s Machine Learning for Health workshop in 2017.

“We have a lot of industry partners who have seen our students’ projects and they go, ‘Wow.
I can’t believe how amazing and rigorous and advanced this project is.’ And it kind of changes people’s minds about what talent looks like and who the face of AI really is,” Posner says.

Last month, AI4ALL announced it will be expanding its reach in a big way: The organization received a $1 million grant from Google to create a free digital version of its curriculum, set to launch in early 2019.

Chad Steelberg, co-founder and CEO of Veritone
Solution: Building the next generation of AI

Serial entrepreneur Chad Steelberg first got involved in AI during his high school years in the 1980s, when he worked on algorithms to predict the three-dimensional structures of proteins. At the time, he felt AI’s capabilities had reached a plateau, and he ended up starting multiple companies in different arenas, one of which he sold to Google in 2006.

A few years later, Steelberg heard from some friends at Google that AI was about to take a huge leap forward — algorithms that could actually understand and make decisions, rather than simply compute data and spit back a result. Steelberg saw the potential, and he invested $10 million of his own money to found Veritone.

Veritone’s aiWARE is an operating system for AI. Instead of communicating between the software and hardware in a computer, like a traditional operating system, it takes users’ queries — such as “transcribe this audio clip” — and finds the best algorithm available to process that query, whether that’s Google Cloud Speech-to-Text, Nuance or some other transcription engine. As of now, aiWARE can scan more than 200 models in 16 categories, from translation to facial recognition.

Algorithms work best when they have a sufficiently narrow training set. For example, if you try to train one algorithm to play Go, chess and checkers, it will fail at all three, Steelberg says.
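At its simplest, the routing idea described above amounts to keeping benchmark scores for each engine on each narrow slice of a task and picking the top scorer per query. The sketch below illustrates that pattern with invented engine names and scores; it is not Veritone’s actual API or architecture.

```python
# Loose sketch of per-query engine routing. Engine names, scores and the
# table structure are hypothetical, not Veritone's aiWARE interface.

BENCHMARKS = {
    # (task, attribute) -> measured accuracy of each engine on that slice
    ("transcribe", "en-US"): {"engine_a": 0.95, "engine_b": 0.90},
    ("transcribe", "en-accented"): {"engine_a": 0.78, "engine_b": 0.88},
}

def route(task, attribute):
    """Return the engine with the best measured accuracy for this slice."""
    scores = BENCHMARKS[(task, attribute)]
    return max(scores, key=scores.get)

print(route("transcribe", "en-US"))        # engine_a
print(route("transcribe", "en-accented"))  # engine_b
```

The point of the design is that no single engine has to be best everywhere: the router sends standard American English to one engine and accented English to another, which is how an ensemble can beat its best individual member.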
Veritone tells the companies it works with to create algorithms for very narrow use cases, such as images of faces in profile. AiWARE will find the right algorithm for the specific query, and can even trigger multiple algorithms for the same query. Steelberg says when an audio clip uses multiple languages, the translations aiWARE returns are 15 percent to 20 percent more accurate than those of the best single engine on the platform.

Algorithms designed for parsing text and speech, such as transcription and translation, are another area prone to bias. One study found algorithms categorized written African American Vernacular English as “not English” at high rates, while a Washington Post investigation found voice assistants such as Amazon’s Alexa have a hard time deciphering accented English.

Though it wasn’t built to eliminate bias, aiWARE ends up doing exactly that, Steelberg says. Just like the human brain is capable of taking all of its learned information and picking the best response to each situation, aiWARE learns which model (or models) is most appropriate to use for each query.

“We use our aiWARE to arbitrate and evaluate each of those models as to what they believe the right answer is, and then aiWARE is learning to choose which set of models to trust at every single point along the curve,” Steelberg says.

In his view, a single biased algorithm isn’t the real issue. “What’s problematic is when you try to solve the problem with one big, monolithic model,” Steelberg says. AiWARE is learning to recognize which models are biased and how, and to work around those biases.

Another factor that results in biased AI is that many algorithms will ignore small subsets of a training set. If a data set of 1 million entries contains three that are different, a model can still achieve a high degree of accuracy overall while performing horribly on certain queries.
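The arithmetic behind that claim is easy to verify: a model that is wrong on every one of the three atypical entries in a million-entry set still reports near-perfect overall accuracy.

```python
# Worked arithmetic for the example in the text: overall accuracy can hide
# total failure on a tiny subgroup.

total, atypical = 1_000_000, 3
correct = total - atypical          # wrong only on the 3 atypical entries
overall_accuracy = correct / total
subgroup_accuracy = 0 / atypical    # 0% on the subgroup that matters

print(f"{overall_accuracy:.5%}")    # 99.99970%
print(f"{subgroup_accuracy:.0%}")   # 0%
```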
This is often the reason facial recognition software fails to recognize people of color: The training set contained mostly images of white faces.

Veritone tells companies to break down training sets into micro models, and then aiWARE can interpolate to create similar examples. “You’re essentially inflating that population, and you can train models now on an inflated population that learn that process,” Steelberg says.

Using small training sets, aiWARE can build models for facial recognition with accuracy in the high 90th percentile for whatever particular subcategory a client is interested in (e.g., all the employees at your company), he says.

Steelberg believes an intelligent AI like aiWARE has a much better chance of eliminating bias than a human auditor. For one, humans will likely have a hard time identifying flawed training sets. They also might bring their own biases to the process. And for larger AI models, which might encompass “tens of millions of petabytes of data,” a human auditor is just impractical, Steelberg says. “The sheer size makes it inconceivable.”
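The “inflated population” idea amounts to augmenting an underrepresented subgroup before training. The toy sketch below uses simple duplication to make the mechanics concrete; real systems, as Steelberg describes, would interpolate new synthetic examples rather than copy existing ones, and all names here are invented.

```python
# Toy sketch of inflating an underrepresented subgroup by oversampling.
# Names and data are hypothetical; production systems would interpolate
# new synthetic examples instead of duplicating existing ones.
import random

def oversample(dataset, group_key, target_size):
    """Duplicate randomly chosen members of a small subgroup until the
    subgroup reaches target_size."""
    group = [x for x in dataset if x["group"] == group_key]
    extra = [random.choice(group) for _ in range(target_size - len(group))]
    return dataset + extra

random.seed(0)  # deterministic for the example
data = [{"group": "majority"}] * 97 + [{"group": "minority"}] * 3
inflated = oversample(data, "minority", 20)
minority = sum(1 for x in inflated if x["group"] == "minority")
print(len(inflated), minority)  # 117 20
```

After inflation the minority subgroup makes up 20 of 117 examples instead of 3 of 100, so a model trained on the new set can no longer reach high accuracy while ignoring it.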