Tag Archive: technology


Speaking of things we don't need.

I loathe commercials, especially ones for products (though everything can be considered a product—even our political ideologies—but that’s a topic for another time).  They’re insulting, filled with lies, and their sole purpose is to convince you to buy something you don’t need.  Thankfully, I haven’t had to watch one in several years.

Let’s back up a second.  We don’t have cable of any sort. We have a digital box, but we can’t get it to work, and that’s ok because most of what’s on TV is a bucket o’ crap anyhow.  And, as a “friend” told me, you can actually watch the shows on a variety of Internet sites.  The networks, tired of being “pirated,” have even begun broadcasting the shows on their own websites the next day, albeit with some advertising.

Recently my wife has become addicted to a show called Doc Martin.  It’s filled with British people and British humor, so you may or may not like it (especially if you’re a tosser).  Maybe because it’s British, it’s particularly hard to track down, though Hulu has it, and that’s where we’ve been watching it.

"Hmmm, maybe those pills will work..."

The shows on Hulu, unfortunately, have commercials (unless you pay to upgrade, of course).  However, the site has a way of facilitating “relevant” ads for your tastes, albeit with your input.  (As you probably know, the marketing-cloud-entity known as Google (and its competitors) is already doing this whenever you surf the Internet.  That’s why the ads you see are eerily similar to your emails, posted stories, and other input with which you provide Skynet, I mean the Internet, in your daily online interactions.)

But Hulu’s mechanism is similar to Facebook’s, where you can tell it (him? them?) whether an ad is relevant to you or not.  That should yield different results from Google’s algorithms, which can produce inaccurate (and hilarious) results depending on how well you fit into the paradigm of their demographic model.  (E.g., I have a female friend who, because of how and what she browses, is targeted with ads for gay black males.)

So back to the story: the first ad we said “wasn’t relevant” was for a brand of vodka.  (Never mind that the ad was typically nonsensical—a vision of the future where remotely-controlled robotic dogs race in a desert, cheered on by Liberacesque-clad spectators).  Well, despite our statement of marketing irrelevance, the ad popped up again during the show.  Now, I can concede that perhaps the little AI running behind that particular broadcast cannot change during that episode, or something similar prevents the “ad-tailoring” software from resetting until the next show starts.

But the ad popped up again during the next episode (I hit “not relevant” every time the ad popped up).  Hmmm.

And again during the next episode.

And the next.

And so on for SIX episodes (where we currently are on the show).

I feel there are three possible explanations for this failure of targeted advertising:

1.  That brand of vodka is determined to make itself relevant to us by driving us to drink through a technology-driven sense of frustration.

2.  They want us to think we can control what ads are being marketed to us, but they really just don’t give a crap.

3. There’s some glitch in the human-coded software that is causing the failure. (BORING….)

I realize we’re simply on a new road to the utopia of “targeted advertising” that the no-privacy advocates are so fond of touting, and there are bound to be a few potholes along the way.

Maybe some things shouldn't be tailored...

But I can’t help feeling some satisfaction in deliberately messing with advertisers and marketers who are so determined to consume me as a product rather than treat me as a human being.

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 (2011)

By Dr. Michio Kaku

Rating: 8/10

The author engages in a fascinating (and sometimes freaky) exercise in extrapolation from current cutting-edge technologies.  Dr. Kaku interviewed some 300 scientists around the world who are pushing the boundaries of their fields, and then predicts what this groundwork will lead to over the next 90 years.  Dr. Kaku argues that “following the science” makes sense when prognosticating our future, since science and technology have been the guiding forces of civilization for millennia.

He investigates the futures of eight areas of development (remember, all of this already has groundwork laid right now):

1. Computers– eventually we’ll control them with direct electronic signals from our brains.  Heck, they’ll be such a ubiquitous commodity that they’ll literally be everywhere: in our clothes, in our houses, and beyond.  We won’t even have computers as we know them today because we’ll simply be online all the time via contact lenses.

This author admits that this artistic rendition might be a slight exaggeration.

2. AI– the Hollywood version of the “rise of the machines” is unlikely (at least, in a single apocalyptic-event sort of way).  Rather, since coding always needs to be done by humans (he doesn’t entertain the idea that an AI could write new code for itself), and since computers lack “common sense” (the sole province of human beings), it’s more likely we’ll simply merge with machines via cybernetics to improve our own bodies.  Computers won’t become as “smart” as humans for a long while, since “intelligence” includes things like pattern recognition and other complicated processes that computers simply are not that good at.  But they’ll take over driving and other mundane tasks that we won’t have to bother with anymore.

3. Medicine– constant monitoring by ubiquitous sensors will allow us to nip diseases in the bud.  We’ll soon have medical scanners like they do in Star Trek, and invasive operations will be a thing of the past.  Further, by decoding various genomes (including our own), we’ll be able to resurrect extinct lifeforms, manipulate genes (to prevent diseases), design our children (!), and, of course, create weapons of horrendous lethality through the manipulation of genetic material.  Or we might be able to live forever… (see below).

4. Nanotechnology– going beyond mere molecules, we’ll be able to further manipulate individual atoms and create things seemingly out of nothing.  Once the “replicator” is built, no one in the world will want for anything–we’ll truly have become masters of the planet.  Tiny robots will help keep our bodies clear of contagion, and can even help keep us young, extending our life expectancy by ten times its current length.  (See where this is going yet?)

5. Energy– Oil will give way to a solar and hydrogen economy (the former being inherently unsustainable).  Moving in this direction will also allow us to address global warming, which the burning of fossil fuels is exacerbating.  Fusion power will also be a possibility, but the end goal is the ‘age of magnetism’ run by superconductors.  Or we might collect solar power more efficiently from space and beam it down to power the Earth.

6. Space Travel– Due to prohibitive costs, we’ll probably only get as far as Mars and the asteroid belt with manned spaceflight, though we might try to build a colony there (or on the moon) to allow further exploration.  (Getting out of the Earth’s atmosphere using chemical rockets is what makes space exploration so expensive; if we can start outside that atmosphere, costs drop dramatically.)  We might even have a space elevator (made of carbon nanotubes) to get us up to a space station or beyond.

7. Wealth– As the nature of technology changes, so will the jobs and wealth (i.e. capitalism) that it creates.  If we can eliminate the scarcity of commodities, then intellectual capitalism will rule the day.  The types of jobs and the fate of nations hang in the balance, and the “winners” will be those who embrace scientific and technological progress.  We need to focus our best and brightest on science (especially the US, which is now lagging behind), not finance, in order to make the most of our potential as a species.

8. Humanity– While the energy consumption of the human race increases, we are moving toward a planetary civilization.  How we deal with the waste of that energy consumption will determine whether we prosper or self-implode.  If we’re able to control the entropy created by our energy needs through nanotechnology, room-temperature superconductors, and by becoming conservators of the natural world, we just might survive and cross the threshold to become a Type I civilization capable of indefinite self-sustainability.  If not, we’ll drown in the high tides of our own pollution and waste.

Overall Dr. Kaku is optimistic; he believes in humanity and its ability to make the right choices.  He does not deny the dangers and obstacles on the road, but he believes we can overcome them.  His concluding “a day in the life of 2100” is a goofy mash-up of his predictions, but it’s entertaining nonetheless.  This work seems mightily important, if only for understanding how far our progress in science and technology has come, and for thinking critically about where it will take us.

 

Favorite quotes:

“The key to a democracy is an educated, informed electorate that can rationally and dispassionately discuss the issues of the day.” (p.351)

“From Aristotle to Thomas Aquinas, perfection meant wisdom rooted in experience and in the relationships by which the moral life is learned through example.  Our perfection lies not in gene enhancement, but in the enhancement of character. -Steven Post” (p.353)

“The Roots of Violence:

Wealth without work,

Pleasure without conscience,

Knowledge without character,

Commerce without morality,

Science without humanity,

Worship without sacrifice,

Politics without principles.

-Mahatma Gandhi” (p.368)

This is the final post of my three-part series taking a look at technology in today’s world. (Parts one and two are still available for a limited time at no cost!)  Here, we’ll take a quick look at cyberwarfare, hackers, and what the government is doing to protect itself against attacks.

Hacking has gone on since Al Gore invented the Internet, with attacks going back into the 1980s.  More recent incidents include the theft of data (2004-2009) by the Chinese (NASA, the World Bank, Lockheed Martin’s F-35 super fighter program), a three-man team that compromised more than 130 million credit and debit cards (2006-2009), and the Conficker worm’s mass invasion of worldwide computers in 2009.

During 2010, there was a definite sense that cyber-attacks were increasing in their sophistication.  More attacks were also leveled against the federal government even if the overall number was down.  In other words, the attacks are becoming more dangerous and more focused against government entities.

This past year (2011) saw quite a spate of attacks by different groups of hackers reported by the media.  Back in June, LulzSec targeted various companies and government agencies including Sony, NATO, and AT&T, mostly compromising user IDs, emails, and passwords.  Another unidentified group hacked the Japanese gaming company Sega, doing much the same thing LulzSec did to Sony customers.  Perhaps the most interesting development out of this story was that LulzSec denied responsibility and instead wanted to seek revenge on the Sega hackers in some sort of hacker showdown.  Update: the hacker group Anonymous is even threatening the Mexican drug cartel Los Zetas; talk about stones!

Attacks by others continued: In July, the hacker group Anonymous infiltrated Booz Allen and posted 90,000 email addresses and passwords; AntiSec hackers posted 10 GB of police data in revenge for the August arrest of Jack “Topiary” Davis, the accused spokesman of LulzSec and Anonymous; the open-source MySQL database site was hacked in September to propagate malware to its visitors (in Russian underground forums, root access to the site was being sold for $3,000); Comodo and DigiNotar (companies that issue security certificates for websites—you know, to let us know they’re safe) were hacked and forged certificates were issued for sites like Yahoo and Google; in September the open-source Linux Foundation was hacked and its sites subsequently shut down to repair the damage; and yesterday (10/31), Symantec released a report detailing a two-month cyber-spying campaign against chemical and defense companies around the world.

I could go on, but I think you see the point.

Reactions to all these attacks have been…illuminating.

Some in Silicon Valley are betting on new technology to prevent such info-thefts and to turn a profit (never mind that this is the same tech arms race that has existed since the beginning).

In February of 2011, White House cyber-security coordinator Howard Schmidt said that the use of the term cyberwarfare is an inaccurate metaphor (just after the NASDAQ servers were breached).  While this rhetoric may be more indicative of a turf war over which agency should have control over cyber-security (DHS vs. NSA), he ought to have remembered the Stuxnet worm that infiltrated and caused physical damage to Iran’s nuclear-fuel centrifuges.  A year later, real concerns about a similar attack against US infrastructure abound in the cyber-security field.  Or perhaps the 2008 hacking of the military’s US Central Command network was just forgotten, too—and the 14 months it took to clean up the infection.  US Deputy Secretary of Defense William S. Lynn III called it “the most significant breach of U.S. military computers ever.”  The TechNewsWorld article relates: “Chet Wisniewski, a security adviser with antimalware software maker Sophos, asserted that Lynn’s article paints a bleak picture of computer security in the military. ‘It implies that the controls at the Pentagon are bad or worse than the average corporate environment.’”

The Pentagon has paid serious attention to cyber-security and cyber-attacks for quite a while, and the infiltration and threatened corruption of Defense Department networks is designated its number one cyber threat.  Its experts are looking to militarize the cloud (“distributed servers and advanced networking and information database technologies”) so as to minimize human interaction in retrieving data and getting the information where and when it’s needed.  Of course, spies are using the cloud as well, since the ability to remotely access information (even from thousands of miles away) is a key change from the good old days of accessing tapes or hard drives.  And they’re pretty much using the same technology found in your iMachine.  (We haven’t even gone into bots/zombies–yeah, that’s a real thing.)

The global state of affairs and nefarious actors already make continuing conflict likely in any number of arenas (military, terror, espionage, etc.).  Guy Philippe Goldstein provides an interesting look at how cyber-warfare can also lead to physical conflict, due to the inherent vagueness of cyber-attacks and the difficulty of identifying the actor(s) behind them.  The NSA would even consider pre-emptive cyber-attacks or military strikes against cyber threats if the potential damage were high enough.

So.  There’s been a lot of information and plenty of boogeyman/doomsday scenarios over this three-part series on technology.  What do we do about it?

A mass conversion to Neo-Luddism is probably out of the question.

It seems to me that we have a few options:

1. Sit back and not worry about it too much (as we’ve been doing). After all, nothing major has happened yet as a result of all these attacks (credit cards can get re-issued, we can check our personal information to make sure no one else is using it, etc.).  Trust the government/military to keep up their vigilance against the threats.

2. Demand stronger security from our government via our Congressional representatives.  (Probably with input from the leading experts in the field).  Couple this with a demand to correlate information and resources across governmental agencies (and tell them to get over themselves and their egos).

3. Reduce our own digital footprint. (Thereby making ourselves less vulnerable to attacks).

4. Improve our own security: keep your computer’s malware protection, patches, and updates current; use strong passwords; pay attention to a site’s security certificate; and don’t fall for common cyber-tricks (like phishing).  (A rough sketch of one such certificate check follows below.)
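Here is that sketch: a minimal Python example of checking a site’s TLS certificate expiry yourself.  The hostname is only a placeholder, and a real monitoring tool would be far more thorough.

```python
# Minimal sketch: how many days until a site's TLS certificate expires?
# Uses only the standard library; the hostname below is just an example.
import socket
import ssl
from datetime import datetime, timezone

def cert_expiry_days(hostname: str, port: int = 443) -> float:
    """Connect, verify the certificate chain, and return days until expiry."""
    context = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2031 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    remaining = expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)
    return remaining.total_seconds() / 86400

if __name__ == "__main__":
    print(f"Days until expiry: {cert_expiry_days('example.com'):.0f}")
```

If the certificate is invalid or doesn’t match the hostname, the handshake above simply fails, which is exactly the warning you want.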

In the end, of course, you’ll have to decide how much technology will be present in your life.  Just be aware–the more you use, the more avenues you have to be exploited if precautions are not taken.

In this second part of my continuing rant on technology, I take a look at the sharing of our personal information, who wants to peek at that data, and how our privacy is compromised.

Panopticon vs. Exhibitionism

I probably don’t need to inform you how various forms of surveillance have been popping up around us for quite some time.  From cameras at intersections to ATMs, from our SSNs to our ISPs, and from satellite imaging to the Patriot Act, the powers that be can readily identify most law-abiding citizens and their actions with regular accuracy (and sometimes even those who are not so law-abiding).  And they’re constantly adding more tools to their arsenal.

Facial recognition technology is evolving quickly, with demands from the military and government to improve the ability to identify subjects in the non-frontal and non-static images usually caught on surveillance cameras.  Plenty of work is being done by scientists to address this issue, and, while it is certainly a worthy cause to nab criminals, terrorists, and other undesirables of the hour, most technology developed for the military/government eventually makes its way into our daily lives.

I recently renewed my license at the DMV, and, “for my own protection,” had my fingerprints digitally scanned into the database.  Facial recognition and digital fingerprinting have also made their way into Pizza Hut and KFC to keep a strict eye on their employees—I mean, for security purposes.  I even saw the cashier at my local Wendy’s have to scan her thumb to work the register.

Ostensibly these measures in the workplace are there to eliminate cheating and slacking in these high-security jobs.  But I think they further dehumanize the workers as a not-so-subtle side effect of this new technology.  This software literally reduces us to our constituent parts in order to identify us (our eyes and fingerprints are unique to us, but both are simply physical characteristics).  We truly become simple cogs in the greater machine of corporations when we walk this road.  Much like using the right tool for the job (say, a square peg for a square hole on the assembly line), the tech ensures the right object (the worker) is what it’s supposed to be and in the place it’s supposed to be (the cashier slotted in at the register).

There was this old philosopher Jeremy Bentham who envisioned an ideal prison called the Panopticon.  It was basically a circular prison that gave the inmates the impression that they were constantly being watched, even if they weren’t.  The idea is that humans tend to behave if they think the authorities are watching and that they could suffer consequences for not abiding by the rules/laws those authorities set forth (everyone slows down when passing a cop on the road).

I’m not saying that the government is building a Panopticon per se, but one could see the constant state of surveillance acting as a de facto means to do just that, albeit under the guise of national/homeland security.  (Take a look at the government-proposed “Data Eye in the Sky” that automatically collects information from the internet, cell phones, and so forth.  According to one excited researcher: “If I have hourly information about your location, with about 93 percent accuracy I can predict where you are going to be an hour or a day later.”).

Of course, it’s not just some incarnation of Big Brother we ought to be concerned with; we do a great job of voluntarily policing ourselves.  With ubiquitous video/photographic cameras built into our cell/smart phones, YouTube, Flickr, and Facebook have become a veritable treasure trove of human behavior in its less savory moments.  These can range from the simply embarrassing to evidence of actual criminal conduct, and it’s all posted in the public domain.  We often decry stringent government surveillance (if we know about it), yet we often encourage or invite our moments of poor decision-making to be posted on social network sites in an incredible display of willful hypocrisy (or narcissism).

Even if we don’t want our social networking information to be readily available, Facebook’s constantly changing formats, privacy preferences, and terms make it quite difficult for the average user to stay on top of keeping their personal information locked down.

Sure, many of the things we use are voluntary (Facebook, Google, apps for our iMachines), but when these things become so commonplace in everyday use, one must come ever closer to emulating the Mennonites to avoid being involved with the “great wireless grab-bag of info” floating around out there.

So what to do?  I’ve heard a couple of sides to the privacy argument.  On one hand, if you haven’t done anything wrong, you don’t have anything to hide, so what does it matter if the government/authorities have access to your personal information?  On the other hand, if we continue down this path, with our civil liberties and freedoms slowly nibbled away at the edges, we’ll soon be living in 1984.  Mostly, I think it’s important for people to know what is happening with their personal information and with surveillance measures, and to care about it (beyond their SSNs and browser history).  In this era where information is power, if we can tear ourselves away from vapid distractions (noise) and focus on the signal, we’ll all be better off.

Next time I want to take a look at cyberwarfare, hackers, and the government’s role in protecting our information infrastructure.

This will be the first of a three-part series on technology—mostly about the current challenges we’re facing with a bit of extrapolation and prediction.

The Power of Technology

Computing power extrapolation

Ever since our ancestors began making tools with stones, we’ve sought to increase our level of technology—and the mantra of the science field has always been “can” we do something, not “should” we pursue such advancements. Since WWII, computing power has increased at an exponential rate (doubling every 18 months or so), and other fields are gaining ground as well. With strides in organic computing and nanotechnology, there seems to be little impediment to this accelerating process, and what this means for the human race should be closely examined.
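As a back-of-the-envelope illustration of what “doubling every 18 months” implies (the doubling period is simply this post’s assumption, not a measurement), the arithmetic looks like this:

```python
# Illustrative arithmetic only: growth implied by a fixed doubling period.
def growth_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Factor by which capability grows over `years` if it doubles every 1.5 years."""
    return 2 ** (years / doubling_period_years)

# Roughly WWII (1945) to this post (2011), and onward to 2045,
# a date that comes up later in this series.
print(f"1945 -> 2011: about {growth_factor(2011 - 1945):,.0f}x")
print(f"2011 -> 2045: about a further {growth_factor(2045 - 2011):,.0f}x")
```

Whether the real curve keeps that pace is exactly the open question, but the sketch shows why small changes in the doubling period swing such predictions wildly.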

Let’s take a quick look at what organic computing actually is. Basically, it’s the field of science that is trying to create adaptive and self-organizing computer systems. Yes, we are currently trying to engineer the AI bogeyman of countless “science fiction” stories (the one where machines gain sentience, replicate, and exterminate mankind). One major player in this field is the German Science Foundation, which researches topics of adaptivity, re-configurability, the emergence of new properties, and self-organization—a list of characteristics that any cautious person would want to limit in computers/machines.

Back in 2004, Israeli scientists built a computer made of DNA that could eventually be used to diagnose and cure a medical condition by identifying cells of a certain type and then releasing medicine to treat these sick cells automatically. It doesn’t take many more steps along that logical path to see a horrifying version of this computer that is weaponized to attack healthy cells of any particular type and then eradicate them (say one’s liver cells or white blood cells). Add this to the abilities of these computers to adapt and reconfigure themselves (e.g. spread from one host to another), spontaneously create new properties (e.g. make themselves deadlier), and self-organize (e.g. make themselves more efficient and replicate), and we’re well on our way to also engineering an extinction-level plague able to eradicate our species. Fortunately, this computer was the simplest of “machines,” and could barely be called a proper computer by those in the field, so perhaps my fears are decades down the road, right?

Not so much. In 2010, researchers from Japan and Michigan “have succeeded in building a molecular computer that, more than any previous project of its kind, can replicate the inner mechanisms of the human brain, repairing itself and mimicking the massive parallelism that allows our brains to process information like no silicon-based computer can.” In other words, we’ve begun creating computers on a molecular level that can mimic the human brain without the size limitations of our cranium. Imagine what a room-sized computer composed of these molecular components combined with the abilities to self-regenerate–and all of the other characteristics mentioned above–will be able to do.

Nanobot

Or imagine this technology as the functioning brains of nanobots, tiny robots able to swim through our bloodstream and attack cells they’ve been programmed to destroy. Sound like more science fiction? It’s not: last year (2010), researchers already successfully deployed these machines in cancer patients. Mark Davis, head of the research team at the California Institute of Technology, recounts his team’s success: “It sneaks in, evades the immune system, delivers the siRNA [small interfering RNA], and the disassembled components exit out.” Yeah, nothing could possibly go wrong with a programmed machine designed to evade our immune system and insert genetic code into cells to disrupt their functioning, right?

Call me an over-reactive doomsayer if you want, but the combination of advances in these fields is scaring the crap out of me (and not just because I read a lot of sci fi).

The Singularity

The birth of AI

Apparently this convergence of computing power, nanotechnology, and genetics is already being studied by people much smarter than me, and they call the moment machines gain sentience (AI) “the Singularity.”  It may sound like science fiction to a lot of people, but if you look at some of the most influential science fiction literature and what has already occurred, you might not be so quick to dismiss the Singularity on such grounds.

If you do take the idea seriously and think about the consequences of this moment, you will come to the conclusion that the human race/era will be irrevocably changed forever. What exactly those changes will look like is up for debate, but it will be fundamentally different from the current age.

Time had a great article about the Singularity and one of its main researchers, Raymond Kurzweil. The same people who are involved in the Singularity field are also looking into life extension, especially through the melding of man and machine. They predict that this could occur in various ways, whether it’s uploading our consciousness into a software program or nanobots repairing the age-damage to our bodies. The piece had a sobering section on how man and machine are already becoming one and functioning side-by-side: “Already 30,000 patients with Parkinson’s disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops.” And don’t forget, in our most recent man-versus-machine competition (one requiring syntax and actual understanding), the machine Watson trounced our best knowledge champions on Jeopardy.

Kurzweil puts his date for the Singularity at a conservative 2045, based on the continuing exponential growth of various technological factors.  That’s well within my lifetime, and certainly within my daughter’s.  Of course, I’m not saying we should all panic due to this gospel-according-to-Kurzweil, but rather that it is worthy of serious thought–and if we do believe this will happen, it deserves appropriate preparation as well.

In the next post I’ll take a look at the other applications of technology (social media and surveillance).

Addendum: 10/19/11: Apparently the government is trying to build an “Eye in the Sky”–a computer system that captures all of the free-flowing data from the internet, cell phones, etc. in order to predict movements of mass human behavior (political revolutions, economic recessions, pandemics, etc.).  You know, kind of like a big Net in the Sky to capture information about human behavior–what could possibly go wrong?

Addendum: 11/14/11:  Now, the field of science is actually helping robots control human bodies–literally.

Addendum: 3/9/12: Oh, and the Navy now has grenade-throwing robots that “fight fires.”  You know, like those pacifist robots in Robopocalypse.

So Facebook is changing…again [wait for gasps and astonishment to subside].

I’m not against change as a rule, but I am annoyed with constant change in a user interface (how you click around a site) without informed user studies (asking us what we want to change or keep the same).  But all that aside, I have bigger concerns about Facebook’s new “Timeline” and developer application changes.

First, the Timeline is a “new way to express yourself,” according to Zuckerberg (that young fella who invented Facebook, more or less).  It’s kind of like Facebook meets Twitter with pictures.  Or, what I like to call, another avenue to feed our narcissism, but I digress.  I hope this is a voluntary option as opposed to an “opt-out” program (like so many of Facebook’s changes).  Indeed, with the last change Facebook made (“top stories”), the algorithms decide for us what a top story is and put those at the top of the news feeds we read about our friends.  It’s kind of like Google’s search algorithms, which filter our search results based on a variety of demographic factors.  In other words, the techies behind these heavily used programs are writing code that limits what we see without our consent.  That’s called censorship, people, and it’s being done right under our noses.

Further, the Timeline goes back as far as you’ve been on Facebook (presumably), and puts all of your actions/updates/etc. in one place.  You know, so someone can look and discover a whole lot about you in one glance instead of taking the usual, longer methods of stalking, um, I mean surveillance…wait, I mean perfectly innocent exploration about their “friend.” Sure, it’s up to us (apparently) to edit this stream of visual faux pas, but we all act responsibly when it comes to posting our lives on the internet, right?

Second, Zuckerberg announced that you can use applications within Facebook to virtually join your friends to listen to music or watch shows.  You know, in case actually getting together for a real social event is just too much of a bother.  Plus, all those new developers and companies can now more easily download all that info about you to bombard you with more “targeted” ads, because that’s just what we all need—more opportunities to rebuff those marketers attempting to convince us to buy products we don’t actually need.  I wonder how much money the marketers paid Zuckerberg to let them tap into his audience of 800+ million users?

Maybe it is time for me and like-minded people to move over to Google+.  I’m sure we have a few years before massive exploitation on that site…

I think it was sometime last week (my sense of the time-space continuum is all FUBAR due to teething-induced sleep deprivation) that I came across a great article written in 1994 about cyberspace.  It’s a bit long and potentially dense, but I highly recommend it.  The author (humdog) accurately observed how humans and corporations interact in the “electronic community,” though she could not foresee how far we would take those interactions down the rabbit hole.  I’d like to address some of her points below and compare them to what is happening today.

“Cyberspace…is a black hole; it absorbs energy and personality and then re-presents it as spectacle.”

Anyone with a Facebook account can attest to this; in 2010, it became the most visited URL in the world.  Heck, anything (“social” networking) that surpasses porn for internet usage ought to bear closer scrutiny.  Although there are some FB users out there who share way too much, most of us present a very specific persona to the rest of the online community, something we do indeed spend a lot of energy on cultivating.  Yet this “spectacle” that we present is subject to ridicule, bullying, and even short-lived fame.  I wonder where we would be if we spent more time developing our actual interpersonal relationships.

“we prefer simulation (simulacra) to reality.  image and simulacra exert tremendous power upon our culture.  almost every discussion in cyberspace, about cyberspace, boils down to some sort of debate about Truth-In-Packaging.”

Again, the facades we create for our social networking sites are our “preferred” (dare I say “idealized”) versions of ourselves (in many cases).  Whether it’s for job-seeking on LinkedIn, mate-seeking on eharmony, or our alter ego in Second Life, we have taken humdog’s idea about simulacra and multiplied it tenfold.

This is probably some guy named Otis living in his parents' basement.

I don’t know if this is some mass-psychological epidemic of multiple-personality disorder or merely a desperate desire for us to live our lives as someone else that stems from dissatisfaction in our daily lives.  Or maybe it’s none of that.  But somewhere along the line, we could very well lose sight of our true selves, or at least do things for our fake personas that we wouldn’t normally do (hopefully).

Of course, with the explosion of the Internet, savvy users are always on the lookout for scams, phishing attempts, and other assorted false sirens meant to lure the unsuspecting.  From ads to photos, one of our first questions is: “is that Photoshopped?”  In other words, the process of fakery has been turned into a verb using its most popular tool.  And apparently it only really bothers people if it’s involved with selling beauty products that might give folks a false sense of reasonable outcomes should they use those products—and even then, only if Photoshop has been used “too much.”  Our sense of reality is being distorted gradually and insidiously, and it does manifest itself in the “real” world.  As science pushes the frontiers of AI and robotics, I suppose it’s only a matter of time before mankind falls to some sort of robo-pocalypse that was previously relegated to the thought exercises of extrapolation by science fiction writers.

“i have seen many people spill their guts on-line, and i did so myself until, at last, i began to see that i had commodified myself…i created my interior thoughts as a means of production for the corporation that owned the board i was posting to, and that commodity was being sold to other commodity/consumer entities as entertainment.  increasingly, consumption is micro-managed…”

Quietly but inevitably, simple chat conversations or searches are turned into a means of highly specific advertising aimed at the user.  Whether it’s on Google or Facebook, we’re providing these companies with a means to efficiently streamline their advertising dollars by giving them a big old bull’s-eye on our virtual foreheads.  Why should they spend their millions on advertising scattershot-style when we’ve lined up on the shooting range like those little ducks moving in a row?  Worse in some ways is how search engines’ algorithms filter for us, based on what they “think” we want to see (inferred from past searches and demographic factors).  In other words, they’re taking control out of the users’ hands and are effectively censoring what we see.
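To make the mechanism concrete, here is a toy Python sketch of that kind of silent re-ranking.  The scoring scheme and the data are invented for illustration; this is not how Google or Facebook actually work.

```python
# Toy personalization: re-rank (and effectively filter) items using a profile
# inferred from past behavior. All names and weights here are made up.
from typing import Dict, List

def personalize(results: List[str], profile: Dict[str, float], top_n: int = 2) -> List[str]:
    """Keep only the top_n results that best match the user's inferred interests."""
    def score(item: str) -> float:
        return sum(w for term, w in profile.items() if term in item.lower())
    return sorted(results, key=score, reverse=True)[:top_n]

results = [
    "Debate: the case against the new policy",
    "Vodka brand launches robot-dog ad campaign",
    "British comedy Doc Martin renewed",
    "Local bookstore closes its doors",
]
profile = {"vodka": 2.0, "doc martin": 1.5, "robot": 1.0}  # built from past clicks
print(personalize(results, profile))  # the debate story quietly disappears
```

Nothing in that snippet is malicious in itself; the point is that the user never sees what was dropped or why.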

Beyond the ordinary conversations, it’s truly astounding what people will say (or exhibit themselves doing) online.  Perhaps the most outlandish things are often posted anonymously (which points to the trend of a lack of accountability for what we spew online).  Yet, I’ve seen embarrassing, vulgar, and hideous things posted under social networking accounts (assuming that the profile is real, which, sadly, I personally know to be the case in some instances).  Such outbursts provide their viewers with fodder for entertainment, and may even have been posted to produce some sort of shock-and-awe campaign of narcissistic warfare.  To me, it shows a lack of dignity and self-respect (or a pathological need for attention).

“the rhetoric in cyberspace is liberation-speak.  the reality is that cyberspace is an increasingly efficient tool of surveillance with which people have a voluntary relationship.”

Yeah, no one forces us to post the intimate details of our lives, yet we often do.  And we certainly know the problems of privacy/security that these sites have.  Yet we continue to share our personal information for some irrational reason (yes, I’m including myself).  Further, we no longer have to watch out only for Big Brother, but also for Little Brother, since nearly everyone has a portable camera on their cell phone to capture anything going on in the street and post it via YouTube for the world to see.  We willingly sell out our fellow man for that spectacle of entertainment (though it is sometimes warranted in cases of criminal conduct or keeping an eye, ironically, on Big Brother).  But then, Big Brother is still out there—government agencies are constantly trying to gain access to the terabytes of personal information that companies have about their users.  Heck, iPhone users are not only tracked with GPS, but their photos are taken without their consent (and presumably stored somewhere).  I’m not saying this is a nefarious plot to track and record all of Apple’s users, but…

“so-called electronic communities encourage participation in fragmented, mostly silent, microgroups who are primarily engaged in dialogues of self-congratulation.  in other words, most people lurk; and the ones who post, are pleased with themselves.”

Yeah, I’m a blogger, so I fall into that latter category—I just hope I’m raising some awareness along the way!  But I think, more broadly, the problem with this information age is the self-filtering most of us do by going to those sites/groups/list-servs that reinforce our views.  Whether it’s AlterNet, FOX News, or some conspiratorial cabal site, we don’t regularly seek out the “other’s” views.  And all too often, our sites are about pointing out what’s wrong with the other side and why our ideology is the right one.  This doesn’t seem to pave the way to the reasonable, rational dialogue we so sorely need (current debt crisis, anybody?).  These tendencies are exacerbated by other media outlets, but that’s a post for another time.

To wrap up a fairly long post (if you’ve made it this far, thanks!), I think we need to remain vigilant and in a cycle of constant analysis of how our culture deals with technology, the division between reality and the digital (simulated) world, and how those tools are being used by various parties.  It’s not up to a few watchdog groups, but rather it is our responsibility as a collective (lest we be assimilated!).  I’d rather not prove any of the dystopian authors correct if we can help it.

So Borders is on the verge of being liquidated.  This has sparked some interesting conversations on a School of Information listserv I belong to, discussing the pros and cons of not only chain book stores vs. mom and pop shops, but the ebook vs. its old paper predecessor.

Full disclosure: I used to work at a Borders Outlet.  I prefer the tangibility of real paper books; in fact, I collect them.  But I think ebooks have their place (especially in terms of storage space!).  I think the demise of book stores is a horrible mistake.

But why did this happen?  There may be a causal chain of events here, and I’ll lay it out as I see it (I haven’t done the research on this, so this is very much a gut reaction).

First, there were the mom and pop stores.  They were cool, had character, and performed a great service in their neighborhoods.  But you probably didn’t get much of a discount. With no other options, c’est la vie.

Then some corporations wanted to get in on the action and, with the blessings of publishers, were able to distribute their books to a wider audience.  Due to mass production and carrying such a large stock, these chain stores were able to offer discounts and carry more titles than the mom and pop stores.  So, slowly, the chains pushed out the little guys, because people do like buying things at a discount.  A few used book stores hung on (since these are really a form of discount store), but they were few and far between, and dependent on a local culture that provided enough demand to justify their existence.

Then the Internet exploded and Amazon began slapping retailers around with even better discounts and a potentially unlimited inventory.  And you could get it shipped right to your house; convenient, no?  Chain stores offered more discounts to be competitive, but continued to struggle.

Then the ebook and ereaders were created, and those paper books became a series of 1s and 0s that could fit a whole library into a thing the size of a paperback.  And they let you make notes, highlight, etc., just like a real book (with even some functionality, like keyword searches, that the old paper predecessors did not have).  Ebooks gained popularity for this convenience and for their price—people were not willing to pay much for a digital copy.

Then the publishers decided that they should be able to charge nearly as much (or more) for those digital copies as for the old paper books.  They sued, and won.  So now people get to pay a higher price for those 1s and 0s.  And with handheld devices becoming ubiquitous, fewer people see the need for paper books, let alone their brick and mortar stores.

So here we arrive, with one of the largest booksellers liquidating and another struggling to survive.  For those who live in the digital world, these losses are no big deal.  They probably feel they get the same experience of serendipitous discovery from their recommender systems online as they would in a book store.  But bookstores also provided more than books.  They’re a communal gathering place (for some), where book groups can get together.  Where authors can come and talk about their works.  Where kids can frolic in the kid section or hear stories during reading times.

Some think that the liquidation of big chains may make room for the small mom and pop shops to come back.  With Amazon and ebook readership on the rise, don’t hold your breath.

But I can’t help wondering if this points to larger trends at work.  (Yes, I’m aware there were management problems and some poor inventory-control issues, but those wound up mattering more in the face of these larger trends.)  In a world where the digital age is supposed to make us more connected, many people isolate themselves from others.  “Social networking” is not the same as actual social interaction.  And our attention span continues to diminish; I’d dare say we’re down to 140 characters (the maximum length of a tweet).  Why else would even our lawmakers begin using Twitter to communicate with their constituents?  Sure, digital innovations have made our lives more convenient (and in some cases, even better—GPS), but digital for digital’s sake isn’t necessarily a good thing.

Maybe it’s a sign that consumers are fed up with the markup on mass-produced items.  It happened to the music industry; other media formats can’t be far behind, as evidenced by the trend toward ebooks.  This is one trend I have some sympathy for, especially if the artists themselves are advocating for it.  The greed and bottom-line tactics of big corporate producers and publishers could only push so far before there was some serious push-back (enabled by technological innovation).

In the end, though, I think the general dwindling of paper books in favor of eformats is a sad thing.  It seems to point to an “instant gratification” trend that doesn’t coincide with sitting down and soaking in a good book.  Having supervised and observed many students of this generation, I see a certain lack of focus or attention on anything that lasts more than 10 minutes.  If this sounds like an old curmudgeon complaining about these young whippersnappers, so be it—that doesn’t necessarily make it inaccurate.

Then again, this is all just my gut reaction and I could totally be pulling this out of thin air, and yet these trends just feel wrong.

I can safely say that I look forward to the day when machines inevitably gain sentience, as I will have a justifiable excuse to pound them into their component bits with a gleeful and maniacal laugh.  It will be a cathartic experience as I vent my pent-up frustration stemming from their more primitive ancestor, the office copier.

Example: I’ve just spent 10 minutes photocopying 15 pages or so out of a book with no problems. Turn a page, hit the copy button, wait a microcosm of eternity, rinse and repeat.  15 times with no major problems. What happens on page 16 of this copying marathon? The machine tells me that it cannot auto-recognize the size of the paper…after 15 times of doing just that! Now granted, this event in and of itself might not be a reasonable excuse to unleash a stream of invectives like some poor soul with Tourette Syndrome, but nonetheless, as the proverbial straw on the camel’s back, I couldn’t help myself.

Really Marc? With all that’s going on in the world, you write about a copier tantrum? Damn straight. Because when all’s said and done, our sphere of awareness necessarily and repeatedly shrinks back down to our immediate surroundings and the constant little things that annoy us.  And it’s these things (and how we react to them) that determine…well, something.

Anyways, back to my main point about an AI-driven apocalypse.  Assuming I survive whatever mass-strike the newly evolved entity…let’s call it Joshua…unleashes upon humanity, I can take solace in the fact that as long as I’m a dozen or so freedom fighters down the line, the machine’s auto-targeting system will be unable to recognize me from the same man-shape it just blew away.  As it readjusts its aim to target a nearby oak tree, I’ll be able to casually walk up to it with my trusty baseball bat and bash it around a bit muttering “does not compute” in my best imitation of a robotic voice (which I’m practicing).

Hell, if Joshua evolves from a Microsoft product, the survivors won’t even have to do much to win the war other than wait around for their evil, red-glowing eyes to convulse into two little blue screens of death we’re all so familiar with.  And if it’s an iJoshua, well those Geniuses are going to have quite a line of irate customers with questions about the new standards of “user-friendliness.”

Technology: innovation in the field has brought us many things, both good and bad.  (Seems like a bland statement, but sometimes a premise from Captain Obvious is needed.)  After all, we have hydrogen fuel cells for efficient cars and hydrogen bombs to eradicate human beings.  We’ve medically cured many diseases and inflicted a holocaust using the same technological methods.  But this particular post concerns the flow of information, mainly through the internets (you know, those sets of tubes that Al Gore helped invent).

Obviously the internet has brought us all the glory that having near-unlimited access to most types of information can bring.  But there are three salient consequences that trouble me: 1) Filter failure, 2) Fractionalization, and 3) Anonymity.

Filter Failure (FF) isn’t a concept I invented; it has gained notoriety through Clay Shirky, who presents an interesting case for one aspect of FF.  It goes something like this: we’re not suffering from information overload (he argues we’ve been suffering from that since the invention of the printing press, or even the Library of Alexandria), but rather from FF, and mostly FF at the source of information, where a filter should distinguish “quality” information from the “noise” we want to keep out (think spam).  Any feelings we have of being ‘overwhelmed’ result from the failure of a filter we had been using to keep the noise out and our consumption at a rate we can handle.
There are some fair points in there, but I think it oversimplifies and overlooks a couple of things.  First, while more information was produced as a result of the printing press, it certainly is not the case that the majority of people were literate or had access to said information.  With the internet, however, access to information (barring the poorest of places that don’t have public terminals) is much less of an issue, and literacy is not nearly as important as it once was (think talk shows, YouTube, etc.).  What I think is missing from Shirky’s argument is that the failure to keep the ‘noise’ out is mostly incidental, not the crux of the problem.  If you get spam, it’s annoying (and sometimes amusing), but it’s quickly remedied for the most part.  In fact, I think most of us have set up filters for ourselves that work pretty well: RSS feeds, bookmarks, favorite sites, etc.  How many people do you know who scour the internet for random things that might (or might not) interest them?  Heck, even randomizing sites like stumbleupon.com have a recommender feature that will help guide you to your preferences.  The filter failure as I see it is that we tend to filter out opposing viewpoints, isolating ourselves from useful “quality” information that we rashly categorize as “noise.”  Such filters exacerbate the second problem…

Increased fractionalization of groups of people and their ideologies.  By self-selecting our incoming information, we’re sure to bolster our views of things by hearing and sharing ideas with like-minded individuals or corporate entities.  We can look down our noses at the “other,” those who disagree with us, while we revel in the superiority of our beliefs.  Whether the label is liberal, conservative, neo-Nazi, Buddhist, Catholic, Lakers fan, Twilight swooner, or boy band groupie, we feel a part of that in-group and can band together in common defense against any who might attempt to persuade us away from our allegiance through argument (rational or otherwise).  Of course, humans can be relatively complex beings, and one may belong to several “groups,” but how many of us actively seek out the opposition’s view of things, as objectively as possible, to see how our own views hold up?  It seems to me that one advantage of the digital age of information flows would be access to all sides of an issue in order to come to a rational, well-founded belief, rather than falling back upon sound bites that ignore the opposition’s valid points or simply yelling louder than the other side.  This extends into radio and TV talk shows and so-called “pundits” as well.  I mean, who do you know that listens to both Rush Limbaugh and Air America?  (Well, did listen, in the case of the latter.)  Further entrenchment in our own ideologies can—and has—led to stagnation, increased vitriol, and further polarization between ‘oppositional’ groups.  I think this fractionalization is further fueled by the nature of the internet, the ability to keep one’s…

Anonymity.  One needs only to read the comments on nearly any article to see the insidious effects that anonymity affords its posters.  Hateful slurs against the “other” are a common occurrence, and “flaming” is a regular concern of moderators.  (In fact, the very need/desire for moderators not only points to the problem of anonymity, but also to the desire to move the filter closer to the source.)  While anonymity allows us to see the prejudices that still underlie many people’s beliefs, it certainly poisons the atmosphere and facilitates a defensive retreat and/or retaliatory response in those who take offense.  It also allows hate groups a safe haven for congregation with little threat of reprisal.  In either case, holding someone accountable for their comments is nearly impossible if they wish to remain anonymous.  On the other hand, anonymity allows whistle-blowers to reveal corruption (wikileaks.com) in a way that was equally impossible before the internet, despite corporate/governmental efforts to curb such transparency.  Like most technology, the internet comes with pros and cons and is merely a tool to be wielded by inherently biased human beings.

Great, Marc, so you’ve complained from your high horse; what should/can we do about it?  I don’t believe (for the most part) in the overregulation of the internet; freedom of access to information is the most empowering thing a citizen can have.  It’s how we interact with the flows of information that needs to change.  I think we need to be honest with ourselves about what we do with that access.  Are we using it to truly learn about the world or an issue, or are we using it merely to reinforce our “side’s” interpretation of any given issue?  I think we need to hold people accountable for their actions as revealed by access to information, not just see it as shocking news to be forgotten by the next click of a hyperlink.  I think we can better utilize the strengths of a democratized internet rather than simply stoke our narcissism or establish virtual friendships.  (Yes, says the person who’s writing a blog and has a list of unknown people to play stupid FB games with; I never said I was immune to my own critical judgments!)  Perhaps most importantly, I think we should go to the source of an issue and read it for ourselves rather than believing the often false hype of the media (Health Care Reform bill, anyone?).

At any rate, I’m off to visit my favorite websites and play some Lexulous…