Will We Define or Limit the Future?

9 thoughts on “Will We Define or Limit the Future?”

    1. Tim,

      I don’t think this is just Apple’s issue; it is an issue for the tech industry at large. Not to defend Apple here, but they do tend to hide the complex and make it very digestible. And as the current brouhaha shows, even they get it wrong.

  1. We need education and choice: education so we can make a choice, and choice so we can decide for ourselves what the advantages and costs of a decision are.
    And no, turning the GPS off if one declines to send data is not choice. Getting delayed data and a worse approximation if I decline is. It’s not up to a programmer to decide what’s best for me; that’s PC thinking, which comes out of the corporate world.
    Education is also necessary to decide whether I want the advanced capabilities of new software. Context is all about sensors and data integration; it should be able to provide me with help and new experiences. It’s up to me to decide when and how much; again, it’s not some Boolean choice made by some know-it-all programmer.

    I personally took up Android programming, since none of the programs out there provide the integration I want. Then one finds Context as a content object. Well, that’s better than Wave’s enum. But… A minimal sketch of the graceful degradation I mean follows below.
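
    Here is that sketch, assuming a hypothetical `userAllowsPreciseLocation` flag set from the app’s own settings screen (`LocationManager` and `Criteria` are real Android APIs; the class and flag names are illustrative, not from any shipping app): decline precise tracking, and the feature keeps working, just coarser and slower.

    ```java
    import android.content.Context;
    import android.location.Criteria;
    import android.location.LocationListener;
    import android.location.LocationManager;

    // Hypothetical sketch: honor the user's choice by degrading location
    // quality instead of switching it off entirely. Assumes the manifest
    // declares ACCESS_FINE_LOCATION, and that requestUpdates() is called
    // from a thread with a Looper (e.g., the main thread).
    public class GracefulLocation {

        // Hypothetical flag, set from the app's own settings screen.
        private final boolean userAllowsPreciseLocation;

        public GracefulLocation(boolean userAllowsPreciseLocation) {
            this.userAllowsPreciseLocation = userAllowsPreciseLocation;
        }

        public void requestUpdates(Context context, LocationListener listener) {
            LocationManager lm =
                    (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);

            Criteria criteria = new Criteria();
            long minTimeMs;      // how often we ask for a fix
            float minDistanceM;  // how far the user must move between fixes

            if (userAllowsPreciseLocation) {
                criteria.setAccuracy(Criteria.ACCURACY_FINE);   // GPS-quality fixes
                minTimeMs = 5000;
                minDistanceM = 10f;
            } else {
                criteria.setAccuracy(Criteria.ACCURACY_COARSE); // cell/Wi-Fi only
                minTimeMs = 60000;                              // delayed updates
                minDistanceM = 500f;                            // rough position
            }

            String provider = lm.getBestProvider(criteria, true);
            if (provider != null) {
                lm.requestLocationUpdates(provider, minTimeMs, minDistanceM, listener);
            }
        }
    }
    ```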

  2. At the end of the day, it actually boils down to the company proactively taking the most ethical stand in protecting its consumers.

    It would be crazy to assume that consumers should always protect themselves; after all, they pay us for our products.

    Hence, more than human limits and human capabilities, it is about the product understanding the expectations humans place on it.

    This becomes ever more relevant as we see more and more robots entering the product arena.

    1. What is wrong with the word “unwell”? It has been around since the 1450s, and it is perfectly fine. Unless you are objecting that a male writer is using a word originally defined for “the period of the month that all childbearing-age women suffer major discomfort.” (Mind you, that amused me the most when Om used that term… I just couldn’t imagine it being “OK.” *chuckle*)

      Anyway, in terms of the article, I pretty much agree. We jump so quickly to new features and new technologies that rarely do enough people actively stop and ask, “What other impact could this have on me?” Sadly, it doesn’t take a PhD to see most of the other impacts on privacy, etc. You just have two groups: the masses that blindly don’t care or don’t want to care, and the extremists that jump on every new thing being “evil.”

      When the whole “your iPhone is tracking you” articles started appearing on Mac Observer, I pulled up the data, evaluated it, and pointed out that the data in question had to be cell phone towers, as it was too regular and consistent to be GPS log paths (a rough sketch of that check follows this comment). And I was ignored, as people wanted to jump on the “***k Apple” bandwagon. When I also pointed out that Apple had been doing this for Wi-Fi for years, people continued to ignore me.

      *shrug* I think the major pink elephant in the room is that people don’t care to be educated. They’d rather the facts fit their own view of the universe, and any facts that don’t are discarded or marginalized until they are sitting on top of their chest, threatening to suffocate them.
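
      For what it’s worth, that check is easy to reproduce. A rough sketch (the class and method names are mine, and the input is illustrative, not the actual log schema): round each fix to roughly 100 m and compare distinct positions against total records. A continuous GPS track yields a ratio near 1.0, while tower logs keep snapping back to the same small set of antennas and yield a much smaller ratio.

      ```java
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      // Hypothetical sketch of the "too regular to be GPS" argument: round
      // each fix to ~3 decimal places (roughly 100 m) and count distinct
      // positions. GPS tracks are mostly unique points; tower logs revisit
      // the same fixed antennas over and over.
      public class FixRegularity {

          // Each fix is { latitude, longitude } in decimal degrees.
          public static double distinctRatio(List<double[]> fixes) {
              Set<String> distinct = new HashSet<>();
              for (double[] fix : fixes) {
                  long lat = Math.round(fix[0] * 1000);
                  long lon = Math.round(fix[1] * 1000);
                  distinct.add(lat + "," + lon);
              }
              return fixes.isEmpty() ? 0.0 : (double) distinct.size() / fixes.size();
          }
      }
      ```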

  3. The problem is not education, and it is not too much or too little choice. The problem is a much more basic failing of human cognition.

    A study was done on how we assess risk, in which people were presented with a “revolutionary new source of energy.” The risks presented were actually the risk assessment for natural gas. Over 70% of respondents said that the new energy source was too dangerous to be widely deployed.

    Risks that are close to home are exaggerated, while distant ones are discounted. Risks associated with known quantities (natural gas) are easier to tolerate than unknowns (a new energy source).

    Location risks are discounted until “OMG, you mean MY phone?” You cannot educate an indifferent audience, and you cannot ever expect a corporation to behave ethically in these matters. The best you can possibly do is to ensure that the resources needed to understand the technology are available to those who are not indifferent, and that companies operate transparently enough for malfeasance to be identified. We do a lousy job of both.

    It’s not that they are inherently evil, but again, corporate structures play into the gaps in human cognition, encouraging otherwise good people to make unethical decisions because “it’s business,” or because “my boss told me to,” and a host of other responses that distance leaders and rank and file from the real ethical and personal consequences of their choices. We have all been deeply indoctrinated with the idea that Facebook is evil, or Blackwater is evil, but that the people in them are not. This disconnection between the people that comprise a company and the company itself is both dangerous and inappropriate.

    Evil, or to be less inflammatory, unethical companies are the products of unethical leaders – full stop. Leaders who institute unethical policies are acting in an unethical manner, and are thus unethical themselves. Companies that behave in a criminal manner are – by definition – led by criminals. The problem is that there are few levers that can move a corporation, and the corporate veil assures that the leaders (in most cases, the men who ordered those policies) cannot be called to account for their actions.

    Facebook/Google/whoever can never be relied upon to behave in an ethical or forthright manner, simply because the interests of the company, its leadership (responsible to their employees), and the employees (who need the job) run counter to maintaining the highest possible standards of privacy. Strong ethical stands by leadership or individuals are possible, and can keep a company from going too far astray, but again, they cannot be relied upon. Even stand-up guys like Brin and Page can be led astray. There are now over 30,000 people marching behind their banners – with whom does their ethical responsibility lie? Their customers, or their employees?

  4. The real problem is that the technology industry keeps being surprised that there is such a thing as the human factor: that after a period of “hard innovation” there comes a phase of “soft innovation,” in which people and society adopt and morph the technology to their needs. Or reject it.

    In scientific terms it’s the schism between technological determinism and social constructivism. In philosophical terms it’s the positivist mindset of technologists that hasn’t caught up to post-modernity.

    We need more Jan Chipchases (Nokia), more Genevieve Bells (Intel), more Paul Dourishes (University of California): anthropologists who understand this and help bust the naive myths and ideologies that pervade this industry.
