Alaska's Capital City Braces for Potential Layoffs

Of the 16,000 State of Alaska employees, more than a quarter work in the capital city. On their lunch break, state employees at the State Office Building talked about their uncertain employment future.

Britten Burkhouse says her office at the Department of Health and Social Services was pretty quiet after getting the email from Gov. Bill Walker about potential state layoffs.

"I think we were all just dealing with the punch. Recuperating maybe a little bit. It wasn't a very good thing on a Monday."

Burkhouse isn't surprised by the email. The threat of a government shutdown and layoffs has loomed since the Legislature recessed at the end of April, but she says the email makes the situation seem more desperate. She thinks Gov. Walker is doing the best he can, but:

"It's come to the point where maybe he's using state employees as leverage to kind of get the Legislature to act."

Burkhouse is a grants administrator for the department. She says she makes sure nonprofits get money to provide services for Alaskans.

"State employees do more than just show up to work every day. We actually help protect the life, health and safety of Alaskans."

Mike Lewis has been a state worker for 15 years. He's the lead courier in mail services, and over the years he's made sure Alaskans get their Permanent Fund Dividend checks. He says the potential layoffs are all part of a game.

"This is what they do. It's government. It's politics. I don't like politics because of this."

And he doesn't think there's anything he can do, like contacting a legislator, to change the situation.

"It's the big people up there that make all the decisions. I don't think they care much about the little guys."

If he's laid off? "I'll go fishing, crabbing – all the things I can do when I'm off. If it's only a week, it wouldn't bother me that much, but if it's longer than that it's the financial thing."

Twenty-three-year-old Mackenzie Merrill just wants job stability. Even before this email, she says, she was getting others about positions being cut. She has been an economist with the Department of Revenue for only eight months. It's her first job out of college.

"I just signed a year-long lease and I want to work here and I want to save money for my future. I went to college. This is what I signed up for. Entering the state during a severe fiscal uncertainty has been disappointing."

Merrill has a vacation planned in July anyway, when layoffs could begin. But she'd like to know that she has a job to come back to.

These Entrepreneurs Are Taking on Bias in Artificial Intelligence

September 5, 2018

Whether it's a navigation app such as Waze, a music recommendation service such as Pandora or a digital assistant such as Siri, odds are you've used artificial intelligence in your everyday life. "Today 85 percent of Americans use AI every day," says Tess Posner, CEO of AI4ALL.

AI has also been touted as the new must-have for business, for everything from customer service to marketing to IT. However, for all its usefulness, AI also has a dark side: in many cases, the algorithms are biased.

Related: What Is AI, Anyway? Know Your Stuff With This Go-To Guide.

Some examples of bias are blatant, such as Google's facial recognition tool tagging black faces as gorillas, or an algorithm used by law enforcement to predict recidivism disproportionately flagging people of color. Others are more subtle. When Beauty.AI held an online contest judged by an algorithm, the vast majority of "winners" were light-skinned. Search Google for images of "unprofessional hair" and the results will mostly be pictures of black women (even searching for "man" or "woman" brings back images of mostly white individuals).

While more light has been shined on the problem recently, some feel it isn't addressed enough in the broader tech community, let alone in university research or in the government and law enforcement agencies that implement AI.

"Fundamentally, bias, if not addressed, becomes the Achilles' heel that eventually kills artificial intelligence," says Chad Steelberg, CEO of Veritone. "You can't have machines where their perception and recommendation of the world is skewed in a way that makes its decision process a non-sequitur from action. From just a basic economic perspective and a belief that you want AI to be a powerful component to the future, you have to solve this problem."

As artificial intelligence becomes ever more pervasive in our everyday lives, there is now a small but growing community of entrepreneurs, data scientists and researchers working to tackle the issue of bias in AI. I spoke to a few of them to learn more about the ongoing challenges and possible solutions.

Cathy O'Neil, founder of O'Neil Risk Consulting & Algorithmic Auditing
Solution: Algorithm auditing

Back in the early 2010s, Cathy O'Neil was working as a data scientist in advertising technology, building algorithms that determined which ads users saw as they surfed the web. The inputs for the algorithms included innocuous-seeming information like what search terms someone used or what kind of computer they owned.

However, O'Neil came to realize that she was actually creating demographic profiles of users. Although gender and race were not explicit inputs, O'Neil's algorithms were discriminating against users of certain backgrounds based on the other cues.

As O'Neil began talking to colleagues in other industries, she found this to be fairly standard practice. These biased algorithms weren't just deciding which ads a user saw, but arguably more consequential matters, such as who got hired or whether someone would be approved for a credit card. (These observations have since been studied and confirmed by O'Neil and others.)

What's more, in some industries — for example, housing — if a human were to make decisions based on the same set of criteria, it would likely be illegal under anti-discrimination laws.
But because an algorithm was deciding, and gender and race were not explicitly factors, the decision was assumed to be impartial.

"I had left the finance [world] because I wanted to do better than take advantage of a system just because I could," O'Neil says. "I'd entered data science thinking that it was less like that. I realized it was just taking advantage in a similar way to the way finance had been doing it. Yet, people were still thinking that everything was great back in 2012. That they were making the world a better place."

O'Neil walked away from her adtech job. She wrote a book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, about the perils of letting algorithms run the world, and started consulting. Eventually, she settled on a niche: auditing algorithms.

"I have to admit that it wasn't until maybe 2014 or 2015 that I realized this is also a business opportunity," O'Neil says. Right before the election in 2016, that realization led her to found O'Neil Risk Consulting & Algorithmic Auditing (ORCAA).

"I started it because I realized that even if people wanted to stop that unfair or discriminatory practices then they wouldn't actually know how to do it," O'Neil says. "I didn't actually know. I didn't have good advice to give them." But she wanted to figure it out.

So, what does it mean to audit an algorithm? "The most high-level answer to that is it means to broaden our definition of what it means for an algorithm to work," O'Neil says. Often, companies will say an algorithm is working if it's accurate, effective or increasing profits, but for O'Neil, that shouldn't be enough.

"So, when I say I want to audit your algorithm, it means I want to delve into what it is doing to all the stakeholders in the system in which you work, in the context in which you work," O'Neil says. "And the stakeholders aren't just the company building it, aren't just for the company deploying it. It includes the target for the algorithm, so the people that are being assessed. It might even include their children. I want to think bigger. I want to think more about externalities, unforeseen consequences. I want to think more about the future."

For example, Facebook's News Feed algorithm is very good at encouraging engagement and keeping users on its site. However, there's also evidence it reinforces users' beliefs rather than promoting dialog, and has contributed to ethnic cleansing. While that may not be evidence of bias, it's certainly not a net positive.

Right now, ORCAA's clients are companies that ask for their algorithms to be audited because they want a third party — such as an investor, client or the general public — to trust them. For example, O'Neil has audited an internal Siemens project and New York-based Rentlogic's landlord-rating algorithm. These clients are generally already on the right track and simply want a third-party stamp of approval.

However, O'Neil's dream clients would be those who don't necessarily want her there. "I'm going to be working with them because some amount of pressure, whether it's regulatory or litigation or some public relations pressure, kind of forces their hand and they invite me in," O'Neil says.

Most tech companies pursue profit above all else, O'Neil says, and won't seriously address the issue of bias unless there are consequences. She feels that existing anti-discrimination protections need to be enforced in the age of AI. "The regulators don't know how to do this stuff," O'Neil says.
"I would like to give them tools. But, I have to build them first. … We basically built a bunch of algorithms assuming they work perfectly, and now it's time to start building tools to test whether they're working at all."

Related: Artificial Intelligence Is Likely to Make a Career in Finance, Medicine or Law a Lot Less Lucrative

Frida Polli, co-founder and CEO of Pymetrics
Solution: Open-source AI auditing

Many thought artificial intelligence would solve the problem of bias in hiring by making sure human evaluators weren't prejudging candidates based on the name on a resume or the applicant's appearance. However, some argue hiring algorithms end up perpetuating the biases of their creators.

Pymetrics is one company that develops algorithms to help clients fill job openings based on the traits of high-performing existing employees. It believes it's found a solution to the bias problem in an in-house auditing tool, and now it's sharing that tool with the world.

Co-founder and CEO Frida Polli stresses that fighting bias was actually a secondary goal for Pymetrics. "We're not a diversity-first platform," Polli says. "We are a predictive analytics platform." However, after she saw that many of the example employees clients used to train Pymetrics's algorithms were not diverse, combating bias became important. "Either you do that or you're actually perpetuating bias," Polli says. "So, we decided we certainly were not going to perpetuate bias."

Early on, the company developed Audit AI to make sure its algorithms were as neutral as possible when it came to factors including gender and race. If a company looking to fill a sales role had a sales team that was predominantly white and male, an unaudited algorithm might pick a candidate with those same traits. Polli is quick to point out that Audit AI would also recommend adjustments if an algorithm were weighted in favor of women or people of color.

Some critics say that tweaking a hiring algorithm to remove bias lowers the bar, but Polli disagrees. "It's the age-old criticism that's like, 'oh well, you're not getting the best candidate,'" Polli says. "'You're just getting the most diverse candidate, because now you've lowered how well your algorithm is working.' What's really awesome is that we don't see that. We have not seen this tradeoff at all."

In May, Pymetrics published the code for Audit AI, its internal auditing tool, on GitHub. Polli says the first goal of making Audit AI open source is to encourage others to develop auditing techniques for their own algorithms. "If they can learn something from the way that we're doing it, that's great. Obviously there are many ways to do it, but we're not saying ours is the only way or the best way."

Other motivations include simply starting a conversation about the issue and potentially learning from other developers who may be able to improve Audit AI. "We certainly don't believe in sort of proprietary debiasing because that would sort of defeat the purpose," Polli says.

"The industry just needs to be more comfortable in actually realizing that if you're not checking your machine learning algorithms and you're saying, 'I don't know whether they cause bias,' I just don't think that that should be acceptable," she says. "Because it's like the ostrich in the sand approach."
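What does such a check actually look like? Below is a minimal sketch of one widely used test, the "four-fifths rule," which compares selection rates across demographic groups. Pymetrics's open-source Audit AI applies statistical tests in this spirit, but the code, group names and numbers here are invented for illustration and are not taken from that repository.

```python
# Toy adverse-impact check: compare selection rates across groups.
# All groups and outcomes below are hypothetical.
from collections import Counter

def pass_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest selection rate divided by highest (four-fifths rule: >= 0.8)."""
    return min(rates.values()) / max(rates.values())

outcomes = ([("group_a", True)] * 60 + [("group_a", False)] * 40
            + [("group_b", True)] * 42 + [("group_b", False)] * 58)

rates = pass_rates(outcomes)
print(rates)                                         # {'group_a': 0.6, 'group_b': 0.42}
print(f"ratio = {adverse_impact_ratio(rates):.2f}")  # 0.70 < 0.8, flag for review
```

An auditor would flag the 0.70 ratio for review. The point is the one O'Neil and Polli both make: "working" gets defined to include outcomes for each group, not just overall accuracy or profit.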
Related: The Scariest Thing About AI Is the Competitive Disadvantage of Being Slow to Adapt

Rediet Abebe, co-founder of Black in AI and Mechanism Design for Social Good
Solution: Promoting diverse AI programmers and researchers

Use of facial recognition has grown dramatically in recent years — whether for unlocking your phone, expediting identification at the airport or scanning faces in a crowd to find potential criminals. But it's also prone to bias.

MIT Media Lab researcher Joy Buolamwini and Timnit Gebru, who received her PhD from the Stanford Artificial Intelligence Laboratory, found that facial recognition tools from IBM, Microsoft and Face++ accurately identified the gender of white men almost 100 percent of the time, but failed to identify darker-skinned women in 20 percent to 34 percent of cases. That could be because the training sets themselves were biased: the two also found that the images used to train one of the facial recognition tools were 77 percent male and more than 83 percent white.

One reason machine learning algorithms end up biased is that they reflect the biases — whether conscious or unconscious — of the developers who built them. The tech industry as a whole is predominantly white and male, and one study by TechEmergence found women make up only 18 percent of C-level roles at AI and machine learning companies. Some in the industry are trying to change that.

In March 2017, a small group of computer science researchers started a community called Black in AI because of an "alarming absence of black researchers," says co-founder Rediet Abebe, a PhD candidate in computer science at Cornell University. (Gebru is also a co-founder.)

"In the conferences that I normally attend there's often no black people. I'd be the only black person," Abebe says. "We realized that this was potentially a problem, especially since AI technologies are impacting our day-to-day lives and they're involved in decision-making and a lot of different domains," including criminal justice, hiring, housing applications and even what ads you see online.

"All these things are now being increasingly impacted by AI technologies, and when you have a group of people that maybe have similar backgrounds or correlated experiences, that might impact the kinds of problems that you might work on and the kind of products that you put out there," Abebe says. "We felt that the lack of black people in AI was potentially detrimental to how AI technologies might impact black people's lives."

Abebe feels particularly passionate about including more African women in AI; growing up in Ethiopia, a career in the sciences didn't seem like a possibility unless she went into medicine. Her own research focuses on how certain communities are underserved or understudied when it comes to studying societal issues — for example, there is a lack of accurate data on HIV/AIDS deaths in developing countries — and how AI can be used to address those discrepancies. Abebe is also the co-founder and co-organizer of Mechanism Design for Social Good, an interdisciplinary initiative that shares research on AI's use in confronting similar societal challenges through workshops and meetings.

Initially, Abebe thought Black in AI would be able to rent a van to fit everyone in the group, but its Facebook group and email list have grown to more than 800 people from all over the world.
While the majority of members are students or researchers, the group also includes entrepreneurs and engineers.

Black in AI's biggest initiative to date was a workshop at the Conference on Neural Information Processing Systems (NIPS) in December 2017 that garnered about 200 attendees. Thanks to partners such as Facebook, Google and Element AI, the group was able to give out more than $150,000 in travel grants to attendees.

Abebe says a highlight of the workshop was a keynote talk by Haben Girma, the first deafblind graduate of Harvard Law School, which got Abebe thinking about other types of diversity and intersectionality. Black in AI is currently planning its second NIPS workshop.

Through the more informal discussions happening in the group's forums and Facebook group, members have applied and been accepted to Cornell's graduate programs, research collaborations have started and industry allies have stepped forward to ask how they can help. Black in AI hopes to set up a mentoring program for members.

Related: Why Are Some Bots Racist? Look at the Humans Who Taught Them.

Tess Posner, CEO of AI4ALL
Solution: Introducing AI to diverse high schoolers

The nonprofit AI4ALL is targeting the next generation of AI whiz kids. Through summer programs at prestigious universities, AI4ALL exposes girls, low-income students, racial minorities and those from diverse geographic backgrounds to the possibilities of AI.

"It's becoming ubiquitous and invisible," says Tess Posner, who joined AI4ALL as founding CEO in 2017. "Yet, right now it's being developed by a homogenous group of technologists mostly. This is leading to negative impacts like race and gender bias getting incorporated into AI and machine learning systems. The lack of diversity is really a root cause for this."

She adds, "The other piece of it is we believe that this technology has such exciting potential to be addressed to solving some key issues or key problems facing the world today, for example in health care or in environmental issues, in education. And it has incredibly positive potential for good."

Started in 2015 as a pilot summer camp for girls at Stanford University, AI4ALL now offers programs at six universities across North America: the University of California, Berkeley; Boston University; Carnegie Mellon University; Princeton University; Simon Fraser University; and Stanford.

Participants receive a mix of technical training, hands-on learning, demos of real-world applications (such as a self-driving car), mentorship and exposure to experts in the field. This year, guest speakers included representatives from big tech firms such as Tesla, Google and Microsoft, as well as startups including H2O.ai, Mobileye and Argo AI.

The universities provide three to five "AI for good" projects for students to work on during the program. Recent examples include developing algorithms to identify fake news, predict the infection path of the flu and map poverty in Uganda.

For many participants, the AI4ALL summer program is only the beginning. "We talk about wanting to create future leaders in AI, not just future creators, that can really shape what the future of this technology can bring," Posner says.

AI4ALL recently piloted an AI fellowship program in which summer program graduates receive funding and mentorship to work on their own projects.
One student's fellowship project involved tracking wildfires on the West Coast; another student, whose grandmother died because an ambulance didn't reach her in time, looked at how to optimize ambulance dispatch based on the severity of the call.

Other graduates have gone on to create their own ventures after finishing the program, and AI4ALL provides "seed grants" to help them get started. Often, these ventures involve exposing other kids like themselves to AI. For example, three alumni started a workshop series called creAIte to teach middle school girls about AI and computer science using neural art, while another runs an after-school workshop called Girls Explore Tech. Another graduate co-authored a paper on using AI to improve surgeons' technique that won an award at NIPS's Machine Learning for Health workshop in 2017.

"We have a lot of industry partners who have seen our students' projects and they go, 'Wow. I can't believe how amazing and rigorous and advanced this project is.' And it kind of changes people's minds about what talent looks like and who the face of AI really is," Posner says.

Last month, AI4ALL announced it will be expanding its reach in a big way: the organization received a $1 million grant from Google to create a free digital version of its curriculum, set to launch in early 2019.

Related: Artificial Intelligence May Reflect the Unfair World We Live In

Chad Steelberg, co-founder and CEO of Veritone
Solution: Building the next generation of AI

Serial entrepreneur Chad Steelberg first got involved in AI during his high school years in the 1980s, when he worked on algorithms to predict the three-dimensional structures of proteins. At the time, he felt AI's capabilities had reached a plateau, and he ended up starting multiple companies in different arenas, one of which he sold to Google in 2006.

A few years later, Steelberg heard from friends at Google that AI was about to take a huge leap forward — algorithms that could actually understand and make decisions, rather than simply compute data and spit back a result. Steelberg saw the potential, and he invested $10 million of his own money to found Veritone.

Veritone's aiWARE is an operating system for AI. Instead of communicating between the software and hardware in a computer, like a traditional operating system, it takes a user's query — such as "transcribe this audio clip" — and finds the best algorithm available to process it, whether that's Google Cloud Speech-to-Text, Nuance or some other transcription engine. As of now, aiWARE can scan more than 200 models in 16 categories, from translation to facial recognition.

Algorithms work best when they have a sufficiently narrow training set. For example, if you try to train one algorithm to play go, chess and checkers, it will fail at all three, Steelberg says. Veritone tells the companies it works with to create algorithms for very narrow use cases, such as images of faces in profile. For each specific query, aiWARE finds the right algorithm, and it can even trigger multiple algorithms for the same query. Steelberg says that when an audio clip contains multiple languages, the translations aiWARE returns are 15 percent to 20 percent more accurate than the best single engine on the platform.
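The routing idea itself is simple to sketch. The following is not Veritone's implementation or API — the engine names, categories and scores are invented — but it shows the core mechanism of an "operating system for AI": keep a registry of narrow engines and dispatch each query to the best-scoring one.

```python
# Hypothetical sketch of per-category engine routing.
from dataclasses import dataclass

@dataclass
class Engine:
    name: str
    category: str    # e.g. "transcription", "translation"
    score: float     # measured accuracy on benchmarks for this category

# Invented registry of narrow engines.
REGISTRY = [
    Engine("speech_engine_a", "transcription", 0.91),
    Engine("speech_engine_b", "transcription", 0.88),
    Engine("text_engine_c", "translation", 0.93),
]

def route(category):
    """Return the best-scoring registered engine for a query category."""
    candidates = [e for e in REGISTRY if e.category == category]
    if not candidates:
        raise ValueError(f"no engine registered for {category!r}")
    return max(candidates, key=lambda e: e.score)

print(route("transcription").name)  # speech_engine_a
```

A production system would presumably track scores per input segment (language, audio quality and so on) rather than one number per engine, which is what would let it combine several engines on a single multilingual clip, as Steelberg describes.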
Algorithms designed for parsing text and speech, such as transcription and translation, are another area prone to bias. One study found algorithms categorized written African American Vernacular English as "not English" at high rates, while a Washington Post investigation found voice assistants such as Amazon's Alexa have a hard time deciphering accented English.

Though it wasn't built to eliminate bias, aiWARE ends up doing exactly that, Steelberg says. Just as the human brain is capable of taking all of its learned information and picking the best response to each situation, aiWARE learns which model (or models) is most appropriate for each query. "We use our aiWARE to arbitrate and evaluate each of those models as to what they believe the right answer is, and then aiWARE is learning to choose which set of models to trust at every single point along the curve," Steelberg says.

On this view, an individual algorithm being biased isn't the issue. "What's problematic is when you try to solve the problem with one big, monolithic model," Steelberg says. aiWARE is learning to recognize which models are biased and how, and to work around those biases.

Another factor behind biased AI is that many algorithms effectively ignore small subsets of a training set. If a data set of 1 million entries contains only three that are different, a model can still achieve a high degree of accuracy overall while performing horribly on certain queries. This is often the reason facial recognition software fails to recognize people of color: the training set contained mostly images of white faces. (A toy illustration of this failure mode appears at the end of this section.)

Veritone tells companies to break training sets down into micro models, and aiWARE can then interpolate to create similar examples. "You're essentially inflating that population, and you can train models now on an inflated population that learn that process," Steelberg says. Using small training sets, aiWARE can build facial recognition models with accuracy percentages in the high 90s for whatever particular subcategory a client is interested in (e.g., all the employees at your company), he says.

Steelberg believes an intelligent AI like aiWARE has a much better chance of eliminating bias than a human auditor. For one, humans will likely have a hard time identifying flawed training sets. They also might bring their own biases to the process. And for larger AI models, which might encompass "tens of millions of petabytes of data," a human auditor is just impractical, Steelberg says. "The sheer size makes it inconceivable."
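Here is the toy illustration promised above. All numbers are invented: the point is that aggregate accuracy can hide a badly served subgroup, and that the crudest form of "inflating the population" is simply resampling the rare subgroup at training time (real systems, Veritone's included, use far more sophisticated augmentation than this).

```python
# Toy numbers: 1,000 test examples, 990 from a majority subgroup and 10
# from a minority subgroup the model handles badly. All values invented.
import random

results = ([("majority", True)] * 970 + [("majority", False)] * 20
           + [("minority", True)] * 2 + [("minority", False)] * 8)

def accuracy(pairs):
    """Fraction of (group, correct) pairs that were answered correctly."""
    return sum(ok for _, ok in pairs) / len(pairs)

print(f"overall: {accuracy(results):.3f}")    # 0.972 -- looks fine
for group in ("majority", "minority"):
    subset = [(g, ok) for g, ok in results if g == group]
    print(group, f"{accuracy(subset):.3f}")   # majority 0.980, minority 0.200

# Crude "population inflation" at training time: resample the rare
# subgroup until the two groups are balanced.
training_set = ["majority_example"] * 990 + ["minority_example"] * 10
minority = [x for x in training_set if x.startswith("minority")]
balanced = training_set + random.choices(minority, k=980)
print(balanced.count("minority_example"))     # now 990
```

A 97 percent overall score hides a 20 percent subgroup score; per-subgroup evaluation is what surfaces it, whether the evaluator is a human auditor or a system like aiWARE.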

Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

Arduino Uno and Raspberry Pi 3 are the go-to options for IoT projects. They're tiny computers that can make a big impact in how we connect devices to each other, and to the internet. They can also be a lot of fun – at their best, they do both. For example, an Arduino and a Raspberry Pi were used to build a custom underwater camera rig for filming the Netflix documentary Chasing Coral, and the two boards have also powered autonomous racing robots.

So how are the two boards different? If you're not sure which one to start with, here's a look at the key features of both the Arduino Uno and the Raspberry Pi 3. This should give you a clearer view of what fits your project well – or just help you decide what to put on your birthday wishlist.

Comparing the Arduino Uno and Raspberry Pi 3

The Raspberry Pi 3 is a Single Board Computer (SBC): a fully functional computer with a dedicated processor and memory, capable of running an OS – by default, Linux, though you can also install other operating systems such as Android, Windows 10, or Firefox OS. Its Broadcom BCM2837 SoC can handle multiple tasks at once, and because the board has its own USB ports, audio output and a graphics driver for HDMI output, it can run multiple programs.

The Arduino Uno is a microcontroller board based on the ATmega328, an 8-bit microcontroller with 32KB of flash memory and 2KB of RAM – nowhere near as powerful as an SBC, but a great choice for quick setups. Microcontrollers are a good pick for controlling small devices such as LEDs, motors and many types of sensors, but they cannot run a full operating system: the Arduino Uno runs one program at a time.

Let's look at the features and how each board stands out:

Speed

The Raspberry Pi 3 (1.2 GHz) is much faster than the Arduino Uno (16 MHz). This means it can handle day-to-day tasks such as web surfing and playing videos with greater ease. From this perspective, the Raspberry Pi is the go-to choice for media-centered applications.

Winner: Raspberry Pi 3

Ease of interfacing

The Arduino Uno offers a simplified approach to project building: it interfaces easily with analog sensors, motors and other components. By contrast, the Raspberry Pi 3 takes a more complicated route to setting up projects. For example, to take sensor readings you'll need to install libraries and connect a monitor, keyboard and mouse, as in the sketch below.

Winner: Arduino Uno
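To give a feel for the Raspberry Pi workflow, here is a minimal Python sketch that polls a digital sensor through the Pi's GPIO pins using the widely used RPi.GPIO library. The pin number and the sensor wired to it are assumptions for illustration; note also that the Pi has no analog inputs, so analog sensors additionally need an external ADC chip.

```python
# Minimal Raspberry Pi GPIO example: poll a digital sensor once per second.
# Assumes a sensor with a digital output (e.g., a PIR motion sensor)
# wired to BCM pin 17 -- adjust the pin to match your own wiring.
import time

import RPi.GPIO as GPIO

SENSOR_PIN = 17  # hypothetical pin; depends on your wiring

GPIO.setmode(GPIO.BCM)           # use Broadcom pin numbering
GPIO.setup(SENSOR_PIN, GPIO.IN)  # configure the pin as an input

try:
    while True:
        if GPIO.input(SENSOR_PIN):
            print("sensor high")
        else:
            print("sensor low")
        time.sleep(1)
finally:
    GPIO.cleanup()               # release the pins on exit
```

The Arduino equivalent is a self-contained sketch flashed onto the board, with no OS, libraries-to-install or peripherals required – exactly the "plug it in and the code runs" simplicity described in the set-up section below.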
Bluetooth/internet connectivity

The Raspberry Pi 3 connects to Bluetooth devices and the internet directly, using Ethernet or Wi-Fi. The Arduino Uno can do so only with the help of a Shield that adds internet or Bluetooth connectivity.

HATs (Hardware Attached on Top) and Shields can be used on both devices to give them additional functionality. For example, HATs are used on the Raspberry Pi 3 to control an RGB matrix, add a touchscreen or even create an arcade system, while Shields for the Arduino Uno include a Relay Shield, a Touchscreen Shield and a Bluetooth Shield. There are hundreds of Shields and HATs that provide the functionality you regularly use.

Winner: Raspberry Pi 3

Supporting ports

The Raspberry Pi 3 has an HDMI port, an audio port, four USB ports, a camera port and an LCD port, which is ideal for media applications. The Arduino Uno has none of these ports on the board, although some can be added with the help of Shields. What it does have: 14 digital input/output pins (of which six can be used as PWM outputs), six analog inputs, a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header and a reset button.

Winner: Raspberry Pi 3

Other features

Set-up time

The Raspberry Pi 3 takes longer to set up, and you'll probably need additional components such as an HDMI cable, a monitor, and a keyboard and mouse. The Arduino Uno you simply plug in; the code then runs immediately.

Winner: Arduino Uno

Affordable price

The Arduino Uno is much cheaper: around $20, compared to around $35 for the Raspberry Pi 3. Note that this excludes the cost of cables, keyboards, mice and other additional hardware – and as mentioned above, you don't need those extras with the Arduino Uno.

Winner: Arduino Uno

Both the Arduino Uno and the Raspberry Pi 3 are great in their individual offerings. The Arduino Uno is an ideal board if you want to get started with electronics and begin building fun, engaging hands-on projects; it's great for learning the basics of how sensors and actuators work, and an essential tool for your rapid prototyping needs. The Raspberry Pi 3, on the other hand, is great for projects that need an internet connection and multiple operations running at the same time. Pick as per your need!

You can also check out some of our books on Arduino Uno and Raspberry Pi:

Raspberry Pi 3 Home Automation Projects: Bringing your home to life using Raspberry Pi 3, Arduino, and ESP8266
Build Supercomputers with Raspberry Pi 3
Internet of Things with Arduino Cookbook

Read Next

How to build a sensor application to measure Ambient Light
5 reasons to choose AWS IoT Core for your next IoT project
Build your first Raspberry Pi project