Why STEM needs more women

 

Women in STEM, courtesy of cnn.com

Before 1993, many women who visited ER rooms in America were misdiagnosed with various illnesses; a large share of them would later be revealed to have had heart disease. A harrowing number of these women were effectively sent to their deaths by diagnostic tests that had never been validated on women. Until 1993 the prevailing belief was that women exhibited the same cardiovascular disease symptoms as men; it was later found that this was not necessarily true. This revelation caused standard testing methods for cardiovascular disease to be discontinued, but by then thousands of women had suffered adverse effects from misprescribed drugs developed in clinical trials focused on the average-sized man. This kind of oversight was motivated by the simplistic idea that women are “emotional” or, in slightly more scientific terms, that they have physiological imbalances that make their test results unreliable. Surprisingly, this procedural bias persisted in the US drug industry from 1850 until 1992, when health regulatory bodies finally mandated the inclusion of women in clinical trials. It is now known that women exhibit different heart disease symptoms and, perhaps consequently, that more women than men die of heart attacks. Although macabre, the story reveals a cross-cutting homogeneity within the scientific enterprise that provokes us to wonder what else we continue to miss daily because of gender-specific valuations. We can only hope that our ignorance is not nearly as fatal as a heart attack.

So far, the story of women in STEM (Science, Technology, Engineering, Mathematics) has been one of inclusion, and one that is often overdressed in romanticized gender parity rhetoric. Statements such as “women are equal” and “women are just as capable as men” are often meant to imply that women are just like men, or I imagine that is what most men hear. This could be because men are unfortunately the status quo, and sometimes when you are the status quo it is hard to realize that there is another “quo”. The argument for equality for women in STEM was, in its formative years, poorly articulated and not well understood. This is true of equality for women in general; perhaps this is why, when faced with some ideas of feminism, men are quick to retort that women should do their own heavy lifting or hold their own in a bare-knuckle fist fight with a man. Of course, the argument is more correctly framed today as an issue of equal rights for women. In the hope of building a more just and equitable society, most of us wish for the equal inclusion of women in both the knowledge and monetary economy because we believe we have to be fair. I’m afraid this sentiment alone is not enough, and in fact it undermines the real contribution of women to any economy.

I am sure by now you have guessed that the author of this article is a man, one who until recently often saw it as a duty to see the equal representation of women as a way to promote just order in the world. What I didn’t know about is the genius of decentralized design and comparative advantage. The truth is that men and women are fundamentally different, but most of us are guilt-tripped into ignoring why this is a good thing. There is a thing, a keen perspective, call it “gender innovation”, that only women can offer because they are women. This perspective is the billions of female minds thinking and dreaming up inventions for world problems, inventions we will never get to see because they are actively being repressed and downplayed by the dominant male bias. The male bias is an anachronistic bastion which maintains that only male ideas, or ideas that solve men’s problems, are worth pursuing because civilizations were built predominantly on the achievements of men; this idea is counter-productive to say the least.

I often liken our predicament to the benefits brought about by extra-terrestrial inventions, technologies meant for space. Autonomous vehicles, for instance, were envisaged for space and deep-sea exploration because there is no place more foreign or hazardous to us. These technologies have since found terrestrial applications. To solve a problem, woman-centric or otherwise, and because of the obvious motivation, a woman will invent something that can find far-reaching applications, widening our inventory of inventions. This is the key to sustainable design and ostensibly the reason nature breeds diversity: it makes the biosphere more adaptive to an ever-changing environment. This is what we’re missing out on, a plethora of woman-built inventions that make us more adaptable to change in political, health, technological, economic and educational ecosystems. This is why the title of this article reads the way it does: “STEM needs more women”. Science and technology actually need the inventions of women. From a purely scientific, economic, functional and unadulterated point of view, you cannot have a sustainable and growing enterprise in science if you willfully crowd out the contributions of other scientists based on their nationality, ethnicity or gender. If you do, science becomes esoteric and secretive, like religious sects crippled by Aristotelian ideas of a universe with a “special” earth at its center. This should be the dominating argument, plain and bare, objectively presented with compelling numbers showing that half of a world’s population worth of intellectual raw material is being wasted. It is the thousands of jobs technology is projected to create with only half a workforce to fill them. Gender equality rhetoric in isolation is nothing but well-meant platitudes that mythologise the benefits of real equality.

In our experience teaching computational thinking and programming to primary-age girls, we found that the greatest challenge is convincing the girls that computer science is not a “boys’ thing”. There is a noticeable lack of a good interpretation of the STEM curriculum, which makes it hard for girls to imagine themselves as thriving scientists and engineers. The curriculum is often presented in a skewed way; especially at primary ages, it suggests that boys are more suitable for STEM jobs because they have early experience playing with games and toys related to those jobs. Scientific jobs are not easy, but primary education should not scare girls away from choosing STEM careers. At the same time, there should be no illusions about how much societal forces will try to discourage them; they should be made aware of the male bias. Women are early adopters of technology, and to encourage our computer science girls’ class we often lead by explaining that the world’s first computer programmer was a woman, Lady Ada Lovelace. To drive this point home we make references to the 1940s, when most men worked hardware engineering jobs while women “manned” office desk jobs working with software, becoming the world’s first software engineering workforce; in fact the term “software engineering” was coined by Margaret Hamilton, a woman. Obscure stories about women’s contributions to computer science are bountiful; read this report on the ENIAC Six, or this one about the history of programming. When teaching the girls, you have to situate them in a historical and present-day reality, all the while checking your own vantage point and bias. The best way to encourage participation of girls in STEM is to give them role models; increasing the prominence of women in STEM galvanises their enthusiasm to pursue STEM careers. Of course, dispelling the myths of the male stereotype gets harder when there are daily reminders that there are those who think women have no place in computer science. The Gamergate controversy is an example of how misogynistic sentiment can sometimes scare women out of the tech industry.

Mae Jemison, the first African-American woman in space, has a background in engineering and medical research

Debunking the male bias doesn’t happen without debunking cultural and racial constructs. It is undoubtedly harder for black women to pursue careers in STEM; even with inclusion programs, the overall number of black women in STEM fields has remained alarmingly low. Cultural generalisations that commit women to other professions are a greater challenge for black women. The admission of black women into academia is often just part of meeting a diversity quota, which makes for a good splash of color on the university personnel page. The academic brilliance of women from minority groups is even less acknowledged in academia, so much so that the term “Minority Academia Ghetto” was invented for the place where the supposedly non-functional, non-essential minority staff of universities are relegated. It is evident that minority groups have been kept out of postdoctoral and higher management positions; this further marginalizes black women and causes them to suffer from depression and impostor syndrome. Clearly we have a long way to go before women feel completely welcome in STEM, but good progress is being made all around: the world is slowly realizing that women will offer an incalculable contribution to science if they are allowed to participate on an even playing field. A more productive world of science would have us see a shift from an emphasis on gender meritocracy that relies on ideas of victimhood and pity praise, to a realization that excluding women is slowing down innovation that would otherwise make us a more advanced society.

Check out the following resources if you want to get into STEM

 

Moore’s Law no longer our performance oracle

Integrated Circuit, photo courtesy of http://wonderfulengineering.com

With the debut of technology theories like the technological singularity and the realization of “the internet of things” on the horizon, there has been clamorous panic among technocrats as they debate whether we can continue to accurately predict or control technological advancement. The lens we have used to predict computational power for the last fifty years or so has been Moore’s Law. Without getting into the highly intellectualized rigmarole of digital electronics, Moore’s law reads like this: “the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years”, but it is interpreted to read like this: “the number of transistors that can be placed on an integrated circuit doubles approximately every two years, increasing computational power or performance exponentially without diminishing returns”.
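To get a feel for what that popular interpretation implies, here is a minimal sketch in Python. It assumes the roughly 2,300-transistor Intel 4004 of 1971 as a baseline and projects transistor counts under a strict two-year doubling:

```python
# Minimal sketch: projecting transistor counts under a strict
# "doubles every two years" reading of Moore's law.
# Baseline assumption: Intel 4004 (1971), ~2,300 transistors.

BASELINE_YEAR = 1971
BASELINE_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Projected transistor count for a given year."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASELINE_TRANSISTORS * 2 ** doublings

for year in (1971, 1991, 2011):
    print(year, f"~{projected_transistors(year):,.0f} transistors")
```

Two decades in, the projection is already in the millions; four decades in, it is in the billions, which is roughly where real chips landed, and exactly why the law looked like an oracle for so long.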

How did we get here? A simple thought experiment called the Sand Heap Paradox can be used to put things in perspective. We have a heap of sand and we continuously remove one grain from it. The change in the size of the heap is nominal, so much so that we fail to realize that it is shrinking, albeit very slowly and on a minuscule scale. Fast forward a few years and there is only a single grain of sand left and no heap. Think of the end of Moore’s law as the moment we realize that there isn’t an infinite amount of sand available and that all predictions have their limits. Sand of course is almost poetic in our case, since silica is refined into silicon, a key ingredient of every microprocessor transistor.

Chart: Moore’s Law over 199 years, and going strong

This is where we find ourselves. The number of transistors you can cram into a chip can’t increase forever because of the physical limitations of silicon-based chips. Some research suggests that this was already the case at 28nm (nanometers), but microprocessor giant Intel reported a 14nm achievement in 2014. The biggest hurdles to shrinking transistors toward atomic sizes are heat and leakage. At 5nm the laws of physics turn the chip into a frying pan, and quantum mechanics at that size scrambles information flow (the ability of signals to travel through a logic gate on a silicon wafer in a coordinated fashion). So Moore’s law falls short at postulating leaps in computational power primarily because the axiom is untenable below a certain size, and that limit is fast approaching. Cutting-edge research is instead looking at quantum and molecular computing to usher in the new paradigm for processing power with post-silicon transistors. In this TED talk Ray Kurzweil gives silicon-based transistors another 10 years before we reach the performance apex. I need to mention that Kurzweil has an impeccable history of predicting trends in technology. Renowned futurist Michio Kaku also echoes Kurzweil’s sentiments. The more closely we examine Moore’s law, or its inaccurate interpretation, the more it appears to be a rule of “dumb”, a self-fulfilling prophecy that merely coincided with Intel’s success in the microprocessor industry; for any scientific purpose Moore’s law is already dead, and it survives purely for marketing. So really the question is not whether Moore’s law is still valid, but for how long it will be the conceptual framework we use to fuel our postulations of computational processing. Pundits say 10 years, but add on some reverse engineering with 3D transistor arrangement and we have roughly fifty years more.
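To make “fast approaching” concrete, here is a rough back-of-the-envelope sketch. It assumes the historical ~0.7x linear shrink per process node (which is what doubles transistor density) and the two-year Moore cadence; both numbers are assumptions for illustration:

```python
import math

# Rough sketch: process nodes left between Intel's 14nm (2014)
# and the ~5nm wall discussed above. Assumptions: each node
# shrinks linear feature size by ~0.7x (doubling density) and
# arrives on the two-year Moore cadence.

start_nm = 14.0
limit_nm = 5.0
shrink_per_node = 0.7
years_per_node = 2

# Solve start_nm * shrink_per_node**n = limit_nm for n.
nodes_left = math.log(limit_nm / start_nm) / math.log(shrink_per_node)
print(f"~{nodes_left:.1f} nodes left, ~{nodes_left * years_per_node:.0f} years")
# -> ~2.9 nodes left, ~6 years
```

Under these assumptions that is roughly three shrinks (14nm to 10nm to 7nm to 5nm), in the same ballpark as Kurzweil’s ten-year estimate.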


In conclusion, the debate on Moore’s law can be polarized into two camps: those who think computational power on silicon-based transistors will keep increasing forever under the Moore paradigm, and those who think the days of increasing computational power using silicon-based transistors are numbered. Now you’re probably wondering whether all of this matters to you as a consumer; the answer is it probably doesn’t, but the next paradigm we adopt to conceptualize leaps in computational performance will probably give rise to greater computational power. When we move on from Moore’s law, and believe me we will, it will punctuate a transformation of our technological civilization. Think positronic brains and human-like interactions with virtual personas. The silver lining on the dark cloud of Moore’s law might be, as Ray Kurzweil puts it, that

“the dwindling of any paradigm is that it creates research pressure to come up with another paradigm that improves on and supplants the previous paradigm”.

Moshe Y. Vardi, who wrote an article on the subject (Is Moore’s Party Over?), also seems to agree, adding that the death of Moore’s law will plunge us into a time when we will have to become creative with algorithms and systems in order to compensate for the stagnation. Exponential growth of computing power under Moore’s law will definitely slow, perhaps to continue under molecular computing or some other far-out concept. That is it for now; time to retire Moore’s law to the same place we put Ptolemaic planetary theories.

You can read Intel co-founder Gordon Moore’s original paper here

Wikipedia and Indigenous Knowledge Systems

You must expect that from time to time this blog will concern itself with research matters around information systems and issues around their appropriation or adoption in indigenous communities. This is because part of our social development agenda is to create tools that aid indigenous communities. That being said, I would like to, albeit at a very high level, deconstruct the implications of participatory computing systems like Wikipedia and the role they play in empowering Namibian communities or the communities of other countries like it. This article is a preamble to a more comprehensive report that I’m working on during the course of the year.

Once A Nomad

With the recent advancements in ICT4D, Namibia has seen many of its indigenous communities receive huge investments in telecommunication infrastructure. There are many reports that document the progress of this endeavor, and I suspect they form part of a greater discourse about the proverbial “bridging of the digital divide”. My concern, however, is not whether rural schools are getting educational necessities like internet access, but rather the socio-technical issues that come with introducing “foreign technologies” into indigenous communities.

I recently got dragged into the maelstrom of Wikipedia and what it means for indigenous knowledge systems. I’m going to ignore any academic citation red tape right now and tell you that indigenous knowledge is popularly defined as “knowledge acquired by people who have had a long rapport with their environment”. The Himba of Namibia, for instance, would typically qualify as possessors of indigenous knowledge, since their livelihood has relied greatly on knowledge acquired from living in Southern African environments over a long time.

Jimmy Wales, one of the founders of Wikipedia, has described Wikipedia’s grand vision as “creating a world where every person on the planet is given free access to the sum of all human knowledge”. I’d like to point out that “sum of all human knowledge” is really where it gets tricky. Currently the regulations that control the commission and omission of information on Wikipedia are laden with what we call a systemic bias. This systemic bias is preventing us from aggregating the sum of human knowledge, because the curators running the show generally stem from Western origins and bring with them Western paradigms. This is promulgated by a few counter-intuitive rules that essentially say everyone is allowed to have their say, as long as they do it in erudite English. One rule (notability), for instance, requires that any information contributed to Wikipedia be anchored by reliable sources. The problem is that “reliable sources” is defined from an occidental point of view.

To put things in perspective, this essentially means that if you as a Himba wanted to submit an article to Wikipedia documenting a unique customary tradition, the article would have to be substantiated by enough notable sources for it to survive Wikipedia’s unforgiving curators. Now, finding reliable sources might not be a problem when writing about a particular butterfly in North America, since zoologists and historians have documented that landscape to near-exhaustive limits, but this is not the case for Namibia. A lot of Namibia’s history and indigenous knowledge is undocumented, and what little has been written about it has been written from the viewpoint of a Western settler intelligentsia that introduces a serious narrative bias.
Wikibias?
The entire thing is a Penrose staircase of never-ending issues, not only socio-technical but sometimes behavioural and cultural as well. With strong cultural underpinnings, we see local value systems clashing violently with those embedded in imported technologies. Perhaps I’m being too idealistic, but when we leave this planet for the stars one day I’d like to leave with the wealth of its knowledge on a memory stick, and I’m not just talking about knowledge on my favorite composer Frederic Chopin, but also how my ancestors made my favorite traditional drink, Oshikundu. Currently, many research groups are experimenting with meta tools that make it easy for would-be editors to become frequent contributors. The declining retention rate of editors on English Wikipedia doesn’t help the faint glimmer of hope of encouraging contribution to the sum of human knowledge. One would think that to overcome local challenges to the meagre repositories of indigenous knowledge we have to wait for a top-down solution, but if M-PESA is anything to go by, maybe we ought to find local solutions by co-opting a Western technology.

(Note the irony in all the wiki links in this post 😉 )

Oculus Rift: Are we finally ready for virtual reality 3D gaming? Again?

 

Oculus Rift

Oculus Rift is another one of those technologies I simply can’t say enough about. Virtual reality, or VR, has been a tripodal technology for the last two decades, staggering onwards as it struggles to find its place in gaming. So what is it about Oculus Rift as a virtual reality technology that is breathing new life into VR?

Rift, as I have chosen to call it due to its disruptive effects, is a head-mounted display designed for immersive gaming. This means it’s a contraption that you strap to your head to deliver a realistic virtual experience: you look through two lenses through which parallel images are distorted and projected. In principle the ocular experience of the Rift is similar to that of old stereoscopes, the kind you’d hold up to your eyes as a kid to see banal images of leaping ponies. To amuse yourself, have a gander at one of the earliest attempts of the VR movement, The Sword of Damocles.
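For a feel of what “distorted and projected” means: the software pre-warps each eye’s image so that the lens bends it back into a wide, natural field of view. Here is a minimal sketch of the radial “barrel distortion” function commonly used for this; the coefficients k1 and k2 are illustrative values I’ve assumed, not Oculus’s actual calibration:

```python
# Minimal sketch of barrel distortion: each eye's image is
# pre-warped radially so the headset lens un-warps it back.
# k1 and k2 are illustrative assumptions, not Oculus's values.

def barrel_distort(x: float, y: float,
                   k1: float = 0.22, k2: float = 0.24) -> tuple:
    """Map a point in lens-centered coordinates (roughly [-1, 1])
    to its pre-warped position on the display."""
    r2 = x * x + y * y                   # squared distance from lens center
    scale = 1 + k1 * r2 + k2 * r2 * r2   # grows toward the edges
    return x * scale, y * scale

# Points near the center barely move; points near the edge are
# pushed outward to compensate for the lens.
print(barrel_distort(0.1, 0.0))  # ~(0.100, 0.0)
print(barrel_distort(0.9, 0.0))  # ~(1.202, 0.0)
```

Each eye gets its own warped view rendered from a slightly offset camera, which is what produces the stereoscopic depth.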

Stereoscope view

As I alluded to earlier, Rift is a new product in a long line of VR head-mounted displays (HMDs). Until now the most successful was probably Forte’s VFX-1 HMD that came out in the 90s. Most of the VR technologies never hit the shores of Namibia in the same way MS Flight Simulator joysticks and Golden China consoles did. Fortunately we didn’t miss out on much, as earlier VR technologies failed in their native markets because they were too expensive, badly designed or posed a serious health risk, much in the way people are prone to unintentional self-harm during a hallucinogenic trip.

Let’s avoid the technical brilliance that has made Rift a success and instead focus on the practicalities that are essential for this kind of fun game design. The Rift, unlike its predecessors, is svelte and weighs little despite its bulky appearance (370g). I had the pleasure of trying out an early prototype at a snowboarding expo, and the headset doesn’t cause any more discomfort than you would experience wearing a hat. The thing that really sets it apart is low latency: the visual response to your head movements is almost instantaneous. Rift’s other compelling feature is its price. This is the most affordable and accessible advanced VR technology has ever been. The Rift currently fetches around US$300, although I’d probably wait for a more consumer-friendly version if I were you.
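Why does low latency matter so much? The commonly cited comfort target for VR is under roughly 20ms from head movement to updated pixels (“motion to photon”). Here is a toy budget illustrating how quickly that allowance gets eaten; the per-stage numbers are assumptions for illustration, not Oculus measurements:

```python
# Toy motion-to-photon latency budget for a head-mounted display.
# Stage timings are illustrative assumptions, not measured values;
# ~20ms is the commonly cited comfort threshold.

BUDGET_MS = 20.0

stages_ms = {
    "head tracking (sensor read + fusion)": 2.0,
    "game logic + rendering (~60fps frame)": 13.0,
    "display scan-out": 4.0,
}

total = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:<40} {ms:5.1f} ms")
verdict = "within" if total <= BUDGET_MS else "over"
print(f"{'total':<40} {total:5.1f} ms ({verdict} {BUDGET_MS:.0f} ms budget)")
```

Blow the budget and the virtual world visibly lags behind your head, which is a fast track to the motion sickness discussed below.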

The Rift experience

Looking at the winning factors of Rift, you will realize that it’s not so much the features of the Rift that have made it a success as the fact that the world has never been this ready for VR. There has been a convergence of virtual technology design and a growing hobbyist/hacker subculture to go with it. That, and the fact that software, hardware processing power and information are so readily available, is why the Rift is our new light in the dark. VR is back in the hands of the gamers. I think just about everyone else abandoned VR while Palmer Luckey slaved away into the night. By the time people realized the implications of this technology and the temporal ripeness of the technology ecosystem, it was already too late. Luckey had emerged from his lair with the eyes of the future.

Oculus Rift

Oculus has made its SDK (free) and dev kit (to buy) publicly available, which means slews of hackers are going to tinker with it the same way they tinkered with Kinect. Although Rift is the sole contender in the VR race right now, the scale of its success will largely depend on how much the game development community wants to include it as part of the normal gaming experience. It will also hinge on the extent to which gaming interfaces are willing to complement the Rift. So far Valve has committed to adopting Rift for Team Fortress 2, Portal and Half-Life 2. I myself would delight at the chance to take a virtual trip around the world of Skyrim or the lush jungles of Far Cry 3. Rift, and VR in general, is not without detractors; head-mounted displays have received their fair share of flak. Watch this panel of VC entrepreneurs tear Virtuix’s Omni treadmill a new one on Shark Tank.

There are physiological concerns that come with using Rift. Motion sickness and other adverse reactions need to be considered before Rift is rolled out to the masses. I can already hear the cacophony of angry mothers and girlfriends (or boyfriends) complaining about how the Rift trivialises the normal human experience. As exciting as the Rift might sound, I think locally we will see the same meagre penetration as Kinect. It will be a niche product for the rich and the really techie before the kwaitos and the FIFA jocks jump on the bandwagon.