Share on Social Media:

BLADE RUNNER: IS THIS YOUR FUTURE?


In Blade Runner, a 1982 science fiction movie, large corporations control nearly everything. The individual is almost powerless. It’s virtually impossible to hold anyone accountable for anything important, because decision makers are faceless and remote. Bureaucracy pervades every facet of life.

Some people argue that the hellish vision in Blade Runner is our future. Gigantic corporations will consolidate their control over our economic life.

Such predictions may seem credible. Corporate giants such as Facebook and Google threaten to acquire near monopolies in their markets, and in the control of information. Microsoft, Apple, General Electric, and Exxon are still among the world's largest firms. If present trends continue, can you keep your independence? Is a Blade Runner type of dystopia inevitable?

In the past, size was a decisive market advantage. Giant corporations owned infrastructure, industrial machines, and factories. They owned distribution networks. They could produce much more than smaller businesses could. Their expenses were spread over a larger number of units. It was much easier to organize production within one firm than among many. In the Machine Age, massive size made sense.

Is this true today? Will it be true in our future?

It might not be. In the Information Age, the advantage of size is not as great as before. Some of the means of production, previously out of reach for individuals and small businesses, are much more accessible. Anyone with the necessary skills can write a new app. With only a computer and a web connection, he can make and sell his products from home.

Bringing new industrial products to market is no longer the exclusive domain of corporate giants. With about $20,000, you could buy a router, a CNC machine, and a 3D printer, and they’d be almost as accurate as the ones owned by industrial giants. If you can’t afford your own machines, you can rent time on someone else’s. You could even rent a factory instead of building your own. This can be true of large scale production, not just product development. Some computer chip designers have been renting capacity in chip foundries owned by others.

Blade Runner may not have been prophecy. For every centralizing economic trend, there is a decentralizing trend, so we are not doomed to a miserable future of domination by giant corporations. In the future, we may have greater control over our lives.

We will say more about this in another post.

(To take control of your economic future, you need a reliable internet connection. If you don’t have one, talk to us. We can help.)

Share on Social Media:

CHROME BLOCKS FLASH

(Image: Chrome's prompt asking whether to run Flash)

Unless you’re a masochist, you hate the Adobe Flash Player. If Google Chrome is your browser, though, you’ve had to live with it anyway.

This is about to change. For at least a year, Google has planned to replace Flash with HTML5. Yesterday, Google publicized several details of the plan, which includes blocking any Flash content that loads 'behind the scenes' (about 90% of the Flash content on the web) beginning in September. In December, HTML5 will be the default player for games and video, except on sites that support only Flash.

The Flash Player has been in decline for several years. Its slump has only accelerated recently, and is likely to continue. In addition to Chrome, Microsoft’s Edge and Mozilla’s Firefox browsers are planning to reduce or eliminate their use of plug-ins like Flash in favor of HTML5.

Though Flash is still incorporated into the Chrome browser by default, Google has been steadily reducing its scope. In September 2015, Chrome 45 began pausing “less important” Flash content automatically. This “less important” content is chiefly animation, ads, and anything else not “central to the webpage”.

Flash is widely reviled for slowing the loading of requested content, consuming too much data and memory, radically reducing battery life, and being dangerously insecure. New vulnerabilities seem to surface every few weeks.

Once Google makes HTML5 the default player for Chrome, Flash will be available only on websites that run exclusively on Flash. Visitors to such sites will be prompted to enable it, and will be given three options: run it once, always run it, or never run it (see the enclosed image).

Chrome is the most popular web browser on the market. According to the federal government, it handles more than 34% of all website visits. Internet Explorer (now Edge) is in second place, with just over 28%. Apple’s Safari is in third place, with just over 20%. Firefox is fourth, with 11%.

(Regardless of what browser you use, you need a reliable internet connection. Talk to us. We can help.) 

Share on Social Media:

TRAINING YOUR COMPUTER- LIKE A DOG

To most of us, computer coding is an inscrutable art. Code writers are the high priests of the Information Age, a technical elite whose work is so far beyond our understanding it seems to be magic. They even speak a different language.

This may be changing. With recent advances in artificial intelligence, your next computer might not need written software or OS code. Instead, you can look forward to training the machine- like a dog.

Conventional programming is the writing of detailed, step-by-step instructions. Any errors or omissions in the code will affect the computer's functions, and errors cannot be corrected without rewriting the code. Operating system developers, most notably Microsoft, often have to issue downloadable "patches" to repair defective code. Some systems, such as Windows 8, are so bloated and error-prone that they are beyond salvage, and have to be withdrawn from the market. The coding protocol is unforgiving. "Garbage in, garbage out" is an industry watchword for a reason. The computer cannot learn, and cannot correct its mistakes. It can do only what the code has taught it to do.
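To make the contrast concrete, here is a deliberately simplified sketch, in Python, of the conventional approach. The function and its inputs are invented for illustration; the point is that every rule is written by hand, and fixing a mistake means editing the rules themselves.

```python
# A caricature of conventional, rule-based programming: the computer can do
# only what these hand-written rules tell it to do.
def looks_like_a_cat(has_whiskers: bool, has_pointed_ears: bool,
                     has_fur: bool, barks: bool) -> bool:
    """Hand-coded, step-by-step rules for deciding 'cat or not'."""
    if barks:
        return False
    return has_whiskers and has_pointed_ears and has_fur

# A fox also has whiskers, pointed ears, and fur, so these rules misfire,
# and the only remedy is to go back and rewrite the code.
print(looks_like_a_cat(True, True, True, False))  # prints True, even for a fox
```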

With machine learning, your computer won’t be coded with a comprehensive set of instructions. It will be trained, and you very likely will have a big hand in training it. As Edward Monaghan wrote for Wired, “If you want to teach a neural network to recognize a cat, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands… of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.”
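The paragraph above can be illustrated with a minimal sketch of training by example, here using scikit-learn's MLPClassifier (one of many possible tools; the article does not name one). The random arrays below merely stand in for real labeled photos.

```python
# A minimal sketch of "training by example": no rules about whiskers or ears
# are written anywhere. The network is shown labeled examples and adjusts its
# internal weights to fit them.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
cat_photos = rng.random((500, 64 * 64))   # stand-ins for flattened cat photos
fox_photos = rng.random((500, 64 * 64))   # stand-ins for flattened fox photos

X = np.vstack([cat_photos, fox_photos])
y = np.array(["cat"] * 500 + ["fox"] * 500)

net = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=50)
net.fit(X, y)

# If the network keeps misclassifying foxes as cats, you don't rewrite the
# code; you gather more labeled examples and train again ("keep coaching it").
print(net.predict(X[:3]))
```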

Machine learning has been with us, in concept, for several decades. It has become practical only recently, though, with revolutionary advances in the development of neural networks, systems modeled on the complex array of neurons in the brain. Machine learning already shapes much of our online activity. Skype Translator translates speech into different languages in real time. The collision-avoidance systems in self-driving cars are neural networks. So is the facial identification feature in Google Photos. Facebook’s algorithm for adjusting user news feeds is a neural network. Even Google’s world-dominating search engine, long a monument to the power of the human coder, has begun to depend heavily on machine learning. In February, Google signaled its commitment to it by replacing the veteran chief of its search engine with John Giannandrea, one of the world’s leading experts in neural networks and artificial intelligence.

Giannandrea hit the ground running. He has devoted Herculean effort to training Google’s engineers in machine learning. “By building these learning systems”, he said last fall, “we don’t have to write these rules anymore.”

Our increased reliance on neural networks will bring radical changes in the role and status of the programmer. The code writer understood precisely how the computer functioned, since he wrote every line of its instructions. It could do nothing he hadn’t told it to do. With machine learning, though, he’s not entirely sure how it performs its assigned tasks. His relationship with it is no longer that of a god exercising absolute rule over his creation; it’s more like the relationship between parent and child, or a dog owner and his dog. Such relationships always entail a certain amount of mystery.

Your computer’s training will not end with your purchase of it. You will teach it what functions you want, how you want them carried out, even the quirks in your personality. It will get continually ‘smarter’ as it adapts to your feedback. You will be training your computer for its entire operating life.
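What that ongoing training might look like can be suggested with a rough sketch using scikit-learn's partial_fit interface; the feature vectors and feedback labels below are simulated stand-ins, not a description of any shipping product.

```python
# A rough sketch of continual learning from user feedback: the model keeps
# updating after deployment instead of being re-coded.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])   # e.g. 0 = "not what I wanted", 1 = "what I wanted"

def learn_from_feedback(features, label):
    """Fold a single piece of user feedback into the model immediately."""
    model.partial_fit(features.reshape(1, -1), [label], classes=classes)

# Simulated stream of feedback collected over the machine's operating life.
rng = np.random.default_rng(1)
for _ in range(1000):
    features = rng.random(10)           # stand-in for observed context
    label = int(features.mean() > 0.5)  # stand-in for the user's reaction
    learn_from_feedback(features, label)
```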

Danny Hillis, writing for The Journal of Design and Science, said, “Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle- and it has a life of its own.”

(Training your computer will require a reliable internet connection. Is yours adequate? If it isn’t, talk to us. We can help.)

Share on Social Media:

Will robots replace us in the labor market? With accelerating automation, it may sometimes seem that our jobs are doomed.

Robots deliver pizza. Google has developed cars that drive themselves. This is only the tip of an emerging iceberg.

Two years ago, Momentum Machines developed a robot that could provide freshly ground and grilled hamburgers to order, with freshly sliced vegetable toppings, and customized meat or seasoning combinations. If a customer wants a meat patty with one-third bison and two-thirds pork, the robot will provide it. And it can produce 360 custom burgers per hour.

A few years ago, the Los Angeles Times began using an artificial intelligence application to write weather and earthquake updates. Afterward, the AI app wrote sports articles. The newspaper tested the app by asking readers to compare articles written by the robot with articles written by human reporters. Very few could tell the difference.

If these examples aren't daunting enough, some researchers believe that artificial intelligence, the internet of things, and virtual reality will make most human jobs obsolete within a decade or two. Robots, we are told, will handle so many of the tasks that now require human labor that very few jobs are likely to survive. Machines will be able to learn, and will constantly become more competent. Eventually, they will know so much that they won't need human supervision. Some analysts argue that we'll need a universal minimum income, so the hordes of displaced workers can survive.

These frightening prophecies, though, are out of touch with reality. We’ve been through technological revolutions before- and they’ve paved the way for more jobs, not fewer.

By inventing mechanical molds and the movable type press, Johannes Gutenberg drove thousands of European scribes out of their vocations. But his invention created new industries. It made the mass production of books and pamphlets possible, and without it the newspaper industry would never have existed. The movable type press killed thousands of jobs, and created millions more.

The automation of agriculture was even more disruptive to labor markets. In the nineteenth century, four out of five American jobs were on ranches or farms. Today, fewer than 3% are. Automated farming freed millions of people for other, less onerous work at higher wages.

We are on the verge of the next great leap in technology. It will, no doubt, destroy tens of millions of jobs. Some workers are likely to be displaced for months, some for years. Transitions to the new information-based economy are going to be difficult. For every job the robots destroy, though, they'll create several more. A 2011 study by the International Federation of Robotics found that the use of one million industrial robots led directly to the creation of three million jobs. Increased use of robots usually fosters lower unemployment.

The jobs that survive the robot revolution are likely to be the ones requiring creativity, empathy and human connections, negotiation and persuasion- and repair and maintenance of robots. We are certain to see more job openings in science, technology, engineering, and math fields. As robots handle more of our repetitive tasks, we will have more opportunity for easier and more interesting work.

Welcome the robots. More than likely, they are your friends.

(To benefit from automation, you need current information. For this, a reliable internet connection is necessary. Talk to us. We can help.)

Share on Social Media:

FORMAL SCHOOLING VS YOU: PART III

What do you hope to get out of formal schooling? More specifically, what do you hope to get from higher education? Is it a lucrative career? A well-rounded personality? The ability to converse with almost anyone about almost any topic? Acceptance in certain social circles? The chance to meet people who can help you succeed?

Whatever your goal, it’s entirely possible to meet it without spending years in a stifling classroom environment, and without piling up tens of thousands of dollars in debt.

High school students are told, over and over again, that a good career is impossible without a college degree. But is this true? Most often, the time spent in pursuit of a degree would otherwise be spent in the labor force, in travel, or in business ventures. Whether they succeed or not, these pursuits are learning opportunities. After a few years of them, assuming we've put forth reasonable effort, we are likely to have contacts, referrals, and experience producing products or services of real value. Employers often value these assets more highly than college degrees.

Even if we don't learn much through formal schooling, the degree is a necessary credential, isn't it? Don't the best jobs require degrees? This may have been true for many years. When college graduates were rare, degrees may have indicated unusual merit. With millions of baccalaureates flooding the job market every year, though, the degree means less than it once did. It is a much less reliable signal of experience, knowledge, or effort than it once was. Many employers, therefore, are looking for other measures of career fitness. A LinkedIn profile and a five-minute Google search may reveal more about an applicant's communication ability, work ethic, and commitment to completing tasks than a degree will.

At the very least, some would say, the formal schooling environment offers effective networking. This is questionable, though. College students typically spend most of their time with people of roughly the same age. Most of the student’s acquaintances are studying the same subjects, and are doing the same things with their time. Almost nobody the student knows is active in business or the labor market. His social network is too narrow to benefit him very much. If he wants to cultivate contacts that will help him find employment, the college environment is the wrong place for it.

Whatever you hope to get from higher education, the formal school setting might not be the best place for it. If you look for them, you are likely to find other ways of meeting your goals. And these other ways are likely to cost much less- in time and money.

(To get the most out of informal learning, you need a reliable broadband connection. Talk to us. We can help.)

Share on Social Media:

SOCIAL MEDIA AND PRIVACY

If you spend much time online, your privacy is unsafe unless you take steps to protect it. What may be even more dismaying is that the rules governing online privacy are inconsistent. They inhibit only a few of the worst potential violators, leaving others free to vacuum up as much of your personal data as their technologies allow.

Last week, the Federal Communications Commission unwittingly underscored this inconsistency. Tom Wheeler, the FCC Chairman, announced a proposal for imposing strict new privacy rules on internet service providers.  From the consumer’s point of view, the proposal was a huge step forward, as ISPs would have to protect personal information, report breaches, and obtain consumer consent for personal data collection. Consumers would have to ‘opt in’ to allow collection of personal information. The new regulations would make it more difficult to use consumer data for targeted advertising.

Unfortunately, the new rules would exempt Facebook, Twitter, Google, and other websites and social media platforms. The American Civil Liberties Union expressed disappointment with the proposed new rules, and other consumer groups gave them only qualified endorsement. Some ISPs panned the proposal. AT&T, for example, called it discriminatory. The telecom giant objected that broadband providers would be held to stricter standards than other online companies.

Since the FCC won’t do much to protect you, you have to protect yourself when using social media. Consider using an ad blocker. Carefully review the privacy policy of any social website you visit.

You need to be vigilant to guard your privacy on any social medium. Some websites change privacy settings frequently, without notifying users. Facebook is especially notorious for this.

If you find that your privacy settings have been changed without your consent, change them back. Then send a complaint to the site administrators. This will not guarantee that the site’s policies will change, but it may help. If enough users complain, administrators may finally pay attention.

Above all else, remain alert. The best safeguard for your privacy is your own common sense.

(For the internet service that meets your needs, talk to us.)

Share on Social Media:

MEMORY BY GOOGLE

Have you ever forgotten a business appointment? Have you ever forgotten your spouse’s birthday? Have you ever forgotten your most important point while briefing your boss about a critical project?

Memory often fails us when we need it most. Within a few years, though, you might not need it. Machines will remember what you need to know.

Last month, IBM patented an algorithm it calls an “automatic Google for the mind”. It could track your behavior and speech, analyze your intentions, and, discerning when you seem to have lost your way, offer suggestions to prod your memory. Dr. James Kozlowski, a computational neuroscientist for IBM Research, is the lead researcher for the automated memory project. Kozlowski says he helped develop his company’s new ‘cognitive digital assistant’ for people with severe memory impairment, but it could help all of us with research, brainstorming, recovering lapsed memories, and forming creative connections.

IBM’s new cognitive tool tackles the most common cause of memory failure: absence of context. Memory, for most of us, is a web of connections. Remembering a single aspect of an experience, we can call up others. To remember is to find the missing piece in a puzzle. If you can’t find the first clue, you can’t find the second, and you don’t have a mental map for the information you need.

Dr. Kozlowski says IBM has found the solution for our memory failures. His cognitive assistant models our behaviors and memories. It hears our conversations, studies our actions, and draws conclusions about our intentions from our behavior and speech patterns, and our conversations with others. From this data, it can discern when we have trouble with recall. It then will guess what we want to know, suggesting names and biographical data within milliseconds. By studying our individual quirks, it will learn what behavior is normal for us, and when we need help.

Synced with your phone, the automated cognitive assistant would search its database of phone numbers to find out who’s calling you. Before you answer, the assistant will display the caller’s name, highlights of your recent conversations, and important events in the caller’s life. At a business meeting, your digital assistant will, on hearing certain words, recall related points mentioned in past meetings, and your research on the subject. It will display them on your mobile device, or ‘speak’ them into an earpiece.
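IBM has not published an interface for this assistant, but a toy sketch in Python can illustrate the caller-lookup behavior described above. The contact record, phone number, and function names are invented for illustration.

```python
# A toy model of the "who's calling?" feature: look the number up, then build
# the reminder card the assistant would display before you answer.
from dataclasses import dataclass, field

@dataclass
class Contact:
    name: str
    recent_topics: list = field(default_factory=list)
    upcoming_events: list = field(default_factory=list)

CONTACTS = {
    "+1-555-0100": Contact("Alex Rivera",
                           recent_topics=["Q3 budget", "vendor contract"],
                           upcoming_events=["birthday on Friday"]),
}

def on_incoming_call(number: str) -> str:
    """Return the summary the assistant would display for an incoming call."""
    contact = CONTACTS.get(number)
    if contact is None:
        return f"Unknown caller: {number}"
    return (f"{contact.name} is calling. "
            f"Recent topics: {', '.join(contact.recent_topics)}. "
            f"Coming up: {', '.join(contact.upcoming_events)}.")

print(on_incoming_call("+1-555-0100"))
```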

It’s likely to be several years before IBM’s automated cognitive assistant is in common use. A few bugs stand in the way of commercialization, but it’s still an impressive achievement.

Share on Social Media:

There is a specter haunting advertisers, marketers, and manufacturers all across the fruited plain. It is the specter of the sharing economy.

Many of the young are turning their backs on the very concept of owning much of anything apart from toiletries and clothing. They see no need to buy cars, houses, appliances, and machine tools. They can rent or share them as the need arises.

Since much of our consumption, and much of the nation’s gross domestic product, is tied up in the purchase of goods we use only occasionally, our settled economic milieu may be ripe for disruption. We buy and maintain vehicles that we drive only one or two hours a day, so they’ll be ready for us in case we need them. We spend more than half of our lives away from home, and volatile career paths may induce frequent moves, so purchasing real estate may mean stranding resources in illiquid assets. Our appliances and our machine tools are idle most of the time.

These patterns of consumption waste massive amounts of money, material, and energy. Recognizing this, many Americans have embraced the sharing economy. Ride-sharing services such as Uber and Lyft, and home-sharing services such as AirBnB, are only part of the first wave.

In concept, the sharing economy is not new, though the applications cited above are. The first major manifestation of the concept was the public library, its first wares donated by wealthy people whose books would otherwise gather dust. Through the lending library, a book that might otherwise be read by only one or two people in its entire existence might be read by dozens in a single year.

Building on the success of Uber, Lyft, and AirBnB, entrepreneurs are extending the commercial sharing concept into other fields. The pioneering efforts of Uber and Lyft in the ridesharing market have opened the door for related transport services. RelayRides, a venture backed by Google, enables borrowing of cars from neighbors, by the day or by the hour. Spinlister is a peer-to-peer network for the sharing and renting of bicycles. Boatbound, “the AirBnB of boat rentals”, helps users reserve boats in any major city near a lake or other navigable body of water.

The sharing concept has begun to make inroads into the labor market. TaskRabbit is a mobile market for the hiring of temporary help: tasks ranging from repair to delivery to cleaning to administrative work, even commercial art and writing. "Rabbits" must undergo interviews and background checks before being listed in the system. Zaarly is a peer-to-peer market for home and commercial services. It differs from TaskRabbit in seeking to create "stores" for particular types of services: iPhone repair or lawn care, for example.

The sharing concept may solve problems that are otherwise insoluble. Where will you leave your dog or cat, for example, if you’ve scheduled an extended trip out of town? A kennel is far from ideal. With DogVacay, you can leave your pet with other pet owners who love animals, and who will give yours a level of care you wouldn’t expect from an overworked and indifferent kennel employee who is responsible for dozens of animals.

One of the most important developing opportunities in the sharing economy is commercial real estate. Long-term leases of offices or merchandise display space may not make sense in a fluid economy, and some business owners may just rent them for an hour or a day, as needed. Some nomadic souls may exchange support pads for tiny homes.

If you’re tired of accumulating goods you seldom use, the sharing economy has much to offer you. And it’s barely getting underway.

(To participate fully in the sharing economy, you need sufficient bandwidth. Talk to us.)

Share on Social Media:

PRIVACY AND THE WEB

The internet has been a huge benefit for most of us. It opens up nearly the entire store of the world’s knowledge to us, and it enables easier and faster communication. It comes at a huge cost, though: loss of privacy.

Your browser tracks your website visits in order to help advertisers identify your interests, so they can more easily craft pitches you will respond to. Your posts on social media, and tags by others about you on social media, can live on forever, despite your best efforts to suppress them.

Some of the more prominent browser operators and social media sites have attempted to limit damage to personal privacy. There is only so much they can do, though. Parties determined enough to find and publicize the information can usually do so. When Google attempted to comply with the European Union's 2014 "Right to Be Forgotten" law, the British Broadcasting Corporation aggregated and reposted the links to its own stories that the search engine had delisted. The State of California enacted an "eraser button" law for minor children. Under its terms, minors are guaranteed a means to erase their social media posts, but the law can't keep others from disseminating the information in them.

Technical fixes may reduce our vulnerability, but they don't eliminate it. Last June, Google began removing links to revenge porn from its search results. This makes revenge porn much harder to find, but not impossible. YouTube's "face-blurring" tool can keep people in a video from being tagged by facial recognition apps. This is especially useful for participants in public gatherings, such as political demonstrations. It won't prevent publication on other social media sites, though. And a person whose face has been blurred can still be identified by clothing, posture, or other distinctive features.

It would be unrealistic to expect to be forgotten on the internet. The best we can hope for is obscurity. Once your information is online, whether posted by you or others, you can’t control who sees it. With some prudence and a few technical fixes, though, you can shield yourself from casual spies. Only the most motivated, persistent, and technically savvy can find what you’re hiding.

To some, this will be cold comfort. For most of us, though, it will be enough. Take a few simple steps to guard your online privacy, and you probably will be fine. Use complex passwords that will be difficult to break. Disable tracking cookies on your browser. Be careful about the websites you visit. Above all else, remember your mother’s advice: avoid doing anything in a public venue you don’t want the whole world to know about. Be especially wary where cameras are likely to be present.
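As one small, concrete example of the first precaution, Python's standard-library secrets module can generate passwords that are genuinely hard to guess. The length and character set below are reasonable assumptions, not an official recommendation.

```python
# Generate a long random password using a cryptographically secure source.
import secrets
import string

def make_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())
```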

If you have ever been online, your privacy won’t be absolute. With a few basic precautions, though, you should be able to avoid serious problems.

Share on Social Media:

DATA CAPS & YOU

To get the most out of your internet service, it may help you to know what a data cap is, and how to avoid breaking it.

Most internet service providers, to keep their networks from becoming clogged, limit the amount of data any customer can use per month. Most providers offer tiered service, with higher prices for plans with higher data caps. If you use more than your monthly allotment, your data speeds will fall dramatically, and will remain low until your next monthly service period begins. This can be highly frustrating, and can make some internet functions impossible.

We won’t tell you to limit your use of the internet. We won’t tell you not to download music or videos. These, after all, are among the reasons most people want broadband service.

Short of such drastic measures, there are a few steps you can take to get the most out of your data plan.

First, assess your household’s needs. If only one or two people will be connected at a time, and if you use the web strictly for e-mail and light surfing, then you may not need extreme speed or a high data cap. However, if several people may be connected at once, you download video or music frequently, or you conduct business over the internet, you will need more speed and more data.
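For that first step, a back-of-the-envelope estimate is usually enough. The sketch below multiplies rough gigabytes-per-hour figures, which are assumptions rather than measurements from any particular provider, by your household's daily habits to approximate monthly usage.

```python
# Rough per-activity data rates, in gigabytes per hour (assumed values).
GB_PER_HOUR = {
    "email_and_browsing": 0.1,
    "music_streaming": 0.15,
    "sd_video": 1.0,
    "hd_video": 3.0,
}

def monthly_gb(hours_per_day: dict, days: int = 30) -> float:
    """Estimate monthly data use from average daily hours of each activity."""
    return sum(GB_PER_HOUR[activity] * hours * days
               for activity, hours in hours_per_day.items())

# Example household: two hours of light browsing plus an hour of HD video a day.
print(round(monthly_gb({"email_and_browsing": 2, "hd_video": 1}), 1), "GB/month")
```

Compare the resulting figure with the data cap on each plan you are considering.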

Second, consider changing your browser. Google Chrome is usually faster than other browsers, but it consumes more data. This is partly because Google, more than any other browser maker, scans your e-mail and searches for keywords, which it uses for precisely targeted ads. Not only is this annoying and a possible privacy concern; it also consumes data. If wringing the most out of your data plan is more important than saving a few seconds on a search, you may want to use a different browser.

Third, close auto-play videos whenever possible. In Chrome, pull up the 'settings' menu and go from there to 'advanced settings'. From there, go to 'privacy', then open the 'plug-ins' tab, and disable Adobe Flash. This won't block all auto-play videos, but it will block most of them. You'll seldom have to listen to annoying ads, and you'll save an enormous amount of data.

With other browsers, the procedure for disabling Adobe Flash is similar, though it may differ in one or two details.

Finally, limit the number of tabs you keep open. If you have multiple tabs open at once, some pictures and videos may be loading in the background.

(To get the most out of your internet service, talk to us. We are your source for HughesNet.)