A great article by Alison Powell: What to do about biased AI? Going beyond transparency of automated systems
Last week someone at work brought to my attention the Google Manifesto shenanigans from the previous weekend. As a woman in tech, I hear about things like this all the time, but I’m too caught up with other activities to respond to them publicly. I try to be a positive person, so instead of dwelling on all the lame comments that peppered social media, I’m just going to focus on those who have rebutted the manifesto with much better articulation than I ever could.
A Brief History of Women in Computing: What I love about this article is that it pointed out what I felt was the biggest problem in the Google Manifesto. The manifesto presented several pieces of biological research and used them to try to justify why women could be less suited for computing. However, as this article points out, the jump was too big. The biological components pointed out may explain certain traits, but not how exactly those traits cause an interest (or lack thereof) in computing specifically. As it is, the manifesto (yes, I read it) sounded like it was motivated by the author’s deeply held stereotypes about women, and he tried to back up his beliefs retroactively. Additionally, the manifesto does not address how modern computing environments were shaped by men and optimized for their own behaviour. Because let’s face it: a profession’s environment affects its workers, while workers in turn affect the environment. The relationship is symbiotic. Several of the manifesto’s points pertain more to computing environments than to the actual task. For example, it said that computing is a high-stress profession requiring less empathy and social interactivity. Is it possible that women, with their different biology, could thrive in a different, yet equally productive, computing environment? I don’t know, and I think it would be more productive to conduct research on it than to rely on stereotypes to make leaps of logic.
So, About this Googler’s Manifesto: What I like about this article is its explanation of how engineering isn’t an isolated endeavour. This was a misconception I had when I was younger, and it’s actually something that attracted me to the field, because I like doing solo work. I’ve grown out of that misconception, though, and I love computer science enough to also appreciate its collaborative and social aspects.
Tech’s Damaging Myth of the Loner Genius Nerd: This article expands a little bit more on the misconception of engineering as a solo task. What’s even more important is that it points out one of the things that really annoy me in the Artificial Intelligence / Deep Learning sector today: engineers seem to be developing tools for things that I don’t think many people will use. Take, for example, machines that beat other players in a very particular game. What is this doing for the world at large? For people who are not gamers? For people from low-income households or third-world countries? What is this doing for the environment, for our health, for improving society in general? As a computer scientist, making a positive impact on the world is my life goal, and it can be puzzling to hear that advancements in my chosen concentration mostly serve such a tiny niche. Every week you hear about a new deep neural net that can now replace a writer or an artist, but how about helping marginalized creators reach the audience who want to read their work? When did the voices of machines become more important than the voices of humans? Especially when you know that these machines have been trained on a very particular subset of works that are most likely mainstream already. This article explains a little more about why empathy might be the key to averting this trend.
Anyway, that’s all I have for now. Like I said, I don’t like to dwell on this kind of situation. If you have any article you’d like to share, let me know. Or if you have thoughts about computer science or AI and what you think they can do for society, let me know too!
I wasn’t able to read a lot this month, and most of the things I read were for school. For some reason, I was just really tired most of the time, and even during my morning commute to work, I just didn’t feel like reading. I think it mostly has to do with my reading slump after finishing Thick As Thieves by Megan Whalen Turner. I’m hoping to hop out of this slump this month.
In any case, here’s what I read for school. These books are for my technical entrepreneurship class.
The Lean Startup by Eric Ries
My professor claims that The Lean Startup was a real game-changer several years ago. I can tell it was, because a lot of the principles mentioned in this book are actively practiced in the industry, at least in the companies I’ve worked for. Things like A/B testing and MVPs, which seem perfectly reasonable now, were surprisingly uncommon some years ago.
The main thing I didn’t like about this book was how disorganized it was. I think there was an attempt to organize the book into sections, but it didn’t work, because a whole lot of the things mentioned in the first few chapters were incessantly repeated throughout the entire book. Not only were the concepts repeated across chapters, but the author also has that high-school essayist’s syndrome of repeating the thesis ten times in a paragraph. It just gets very redundant. Don’t get me wrong, the ideas are extremely helpful and important. I just did not like the way they were written.
Business Model Generation
The coolest thing about this book is its format. I ended up buying a physical copy, and I would recommend that anyone who wants to read this book buy a physical copy as well. Its strengths are really in how the message is conveyed. The design is spectacular, very sleek and almost magazine-like.
The most important parts of this book are in the first quarter. Once you finish reading about the different components of a business model and the different types of business models in existence, the content gets a little uninteresting. The chapters on storytelling and visualization were pretty much common sense. And you can tell that they’re common sense, because there were basically three main ideas surrounding them that were repeated over and over again throughout the pages.
Well, here’s to hoping I’ll get more interesting things to read this month.
Someone at work brought this up during lunch, and I read it after my break. I’m more on the ML/Big Data side of things, but as an aspiring writer, I can very, very much attest to the frustrations of NLPers. I’m glad someone finally called out the academic trend of “over-selling,” especially when it pertains to deep learning, the buzzword of the moment.
I think the problem definitely starts in academia and the sense of competitiveness there, but I also wish that tech journalism were better. I remember reading a paper on using neural networks to separate the content and style of a piece of artwork; some articles that responded to it were so excited that they even deemed human artists obsolete. Or perhaps it wasn’t excitement so much as fear of the impending AI apocalypse. *sigh* I just wish for more honest, more grounded coverage of what’s going on in the computer science community, instead of the super-hyped coverage we currently get from both the media and educational institutions.
In this post, I’m going to focus for a moment on my full-time job. I know that this blog is mostly filled with my hobbies and personal projects so it might seem like those are the only things I do. However, a good chunk of my life actually revolves around my career in tech.
I started my internship as a data scientist at the beginning of the month. It’s my second full-time job as a computer scientist, and in some ways, I cannot help but compare it to my first, where I worked as a front-end software engineer for over two years at a smaller company. Both companies are great, filled with talented people I get along with. More importantly, at both companies I am doing work that I am passionate about, even though the work itself is different.
And that’s what I want to focus on in this post: the difference between my experience as a front-end software engineer at a small start-up(-ish) company, and my impression so far as a data scientist at a much, much larger company.
I knew that the work would be different. And yet, I think I naively assumed that the job would be similar enough that I could measure my productivity in the same way. In my old job, I knew I was being productive when I managed to finish my assigned tickets. Depending on the tasks, I could finish about five moderate bugs in a day; for new features, I could at least get some new code out to be reviewed within a day or two at most.
In my new job, the process is entirely different. We’re working Kanban style, rather than in sprints. My tasks are a little more vague. Instead of having a specific goal I could measure, like changing the header background from white to grey, or adding a pop-out to a link, I’m assigned tasks like visualizing the clusters of similar items. As you can see, this task is less measurable. For one thing, the end goal isn’t just to have a nice visualization, right? Underneath that statement, I know that my goal is also to analyze the visualization, to obtain insights from the clustering. And this means that I have to figure out which clustering algorithm can actually give me a good visualization; it means that I have to find the data that can work with such an algorithm; it means that I have to find a visualization that can actually give me insights. And in the end, how do I know whether the insights are meaningful or not?
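To make that vagueness a little more concrete, here is a minimal sketch of the kind of sub-task hiding inside “visualize the clusters of similar items.” Everything here is illustrative: the toy data, the choice of plain k-means, and k = 2 are my own assumptions for the example, not the actual data or pipeline from my job.

```python
# Toy version of "cluster similar items": group 2-D feature vectors
# with a plain, from-scratch k-means, then inspect the groupings.
import random

def kmeans(points, k, iterations=20, seed=0):
    """Return a cluster label (0..k-1) for each point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # pick k points as initial centers
    labels = [0] * len(points)
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center.
        for i, (x, y) in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2,
            )
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [p for p, lbl in zip(points, labels) if lbl == c]
            if members:
                centers[c] = (
                    sum(x for x, _ in members) / len(members),
                    sum(y for _, y in members) / len(members),
                )
    return labels

# Two obvious blobs; the algorithm should separate them into two clusters.
points = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels = kmeans(points, k=2)
print(labels)
```

Even in this toy form, the open questions from the paragraph above show up immediately: why k-means, why k = 2, which features to feed it, and what counts as a “good” grouping once you plot it.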
For the past four weeks, I have struggled a little with this vagueness. Yes, I know I could ask, but I get the impression that it’s also part of the job of a data scientist to figure out these things. Whenever I’m assigned a task, it’s no longer up to a product manager to break down that task for me. It’s up to me to figure out what’s involved in that process.
I think this is the biggest difference between my previous job and the current one. As a junior software developer, my job was to implement whatever the product managers told me to. This is in contrast with a research position, where my job is to discover what must be implemented.
I worry that I’m not being as productive as I can be, and my worries are compounded with the fact that I don’t actually know how to measure my productivity as a data scientist. Which leads me to this passage from The Lean Startup by Eric Ries.
When I worked as a programmer, that meant eight straight hours of programming without interruption. That was a good day. In contrast, if I was interrupted with questions, process, or — heaven forbid — meetings, I felt bad. What did I really accomplish that day? Code and product features were tangible to me; I could see them, understand them, and show them off. Learning, by contrast, is frustratingly intangible.
Wow. This book is required reading for my Technical Entrepreneurship course that runs alongside the internship. I don’t have much of an entrepreneurial spirit in me, but when I read this, I thought, “Aha! This is why this book is required reading!” I never realized what it was that bothered me as I started my career in data science, until I found this passage in the book. I could have never put it in a better way.
Learning, by contrast, is frustratingly intangible.
I realized much of what I do in my new job as a research intern is learning. When you’re researching, what you’re doing is learning. You’re learning what works and what doesn’t. I was so used to measuring my productivity in terms of how much code I write or how many tasks I finished. Now I have to figure out a way to measure my productivity in terms of learning and the return value from what I learn.
Data scientist Cathy O’Neil comments on how algorithms — “under the guise of math, fairness, and objectivity — reinforce and magnify the old biases and power dynamics that we hoped they would eliminate.”
As a computer scientist, this is the kind of thing I’m concerned about, and sadly, very few people are aware of it. It’s difficult for computers to solve problems that we humans don’t already know how to solve ourselves.