It’s looking like the digital divide may have less gray hair than it used to — but it’s still a big issue for U.S. seniors.
According to new research from the Pew Internet and American Life Project, for the first time more than half (53%) of Americans age 65 or older now use the Internet or e-mail.
Also, most Internet-using seniors have made a daily habit of going online; Pew noted that 70% of them access the Internet on a typical day.
» via CNN
Conscious machines are everywhere in popular culture – countless books, films and TV shows are based in a future where humans have robot or computer companions. A world with intelligent machines is something nearly everyone can readily picture and, while we may not know how to create these machines at present, it is generally assumed that we will recognize them if they come around. If a conscious machine were created, the argument goes, it could simply tell us itself. Or if it were sneaky, and chose not to, then we would quickly realize it was in our midst because of the way it acted. However, recent research does nothing to suggest these assumptions are valid. What’s more, we must ask if it will ever be possible to identify conscious machines. If we cannot tell the difference, what does that say about our ideas of consciousness itself? (via Hidden Smiles and the Desire of a Conscious Machine)
Whenever a request reaches Twitter, we decide if the request should be sampled. We attach a few lightweight trace identifiers and pass them along to all the services used in that request. By only sampling a portion of all the requests we reduce the overhead of tracing, allowing us to always have it enabled in production.
The Zipkin collector receives the data via Scribe and stores it in Cassandra along with a few indexes. The indexes are used by the Zipkin query daemon to find interesting traces to display in the web UI.
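The sampling and trace-propagation idea described above can be sketched in a few lines. This is a toy illustration, not Zipkin's actual implementation (which is Scala, with Scribe and Cassandra behind it); the header names and the sample rate here are assumptions chosen for clarity.

```python
import random
import uuid

SAMPLE_RATE = 0.001  # hypothetical: trace roughly 0.1% of requests


def start_trace(incoming_headers):
    """Decide once, at the edge, whether this request is sampled, then
    build the lightweight identifiers passed to every downstream service."""
    if "X-Trace-Id" in incoming_headers:
        # Already inside a trace: reuse the caller's trace id and decision.
        trace_id = incoming_headers["X-Trace-Id"]
        sampled = incoming_headers.get("X-Sampled") == "1"
    else:
        trace_id = uuid.uuid4().hex
        sampled = random.random() < SAMPLE_RATE
    span_id = uuid.uuid4().hex  # unique per service hop
    return {
        "X-Trace-Id": trace_id,
        "X-Span-Id": span_id,
        "X-Parent-Span-Id": incoming_headers.get("X-Span-Id", ""),
        "X-Sampled": "1" if sampled else "0",
    }


def record_span(headers, annotation):
    """Only sampled requests pay the cost of shipping span data to a
    collector, which is what keeps tracing cheap enough to leave on."""
    if headers["X-Sampled"] == "1":
        print(f"trace={headers['X-Trace-Id']} "
              f"span={headers['X-Span-Id']}: {annotation}")
```

Because the sampling decision is made once and carried in the headers, every service in the request path agrees on whether to record, and unsampled requests cost almost nothing.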
There are many APM solutions out there, but sometimes the overhead, or a lack of support for specific components, may lead to the need for custom solutions like this one. While from a different field, this is just another example of why products and services should be designed with openness and integration in mind.
This year’s study also includes special reports on the impact of mobile technology and social media on news. Those reports, which feature new survey data, find that rather than replacing media consumption on digital devices, people who go mobile are getting news on all their devices. They also appear to be getting it more often, and reading for longer periods of time. For example, about a third, 34%, of desktop/laptop news consumers now also get news on a smartphone. About a quarter, 27%, of smartphone news consumers also get news on a tablet. These digital news omnivores are also a large percentage of the smartphone/tablet population. And most of those individuals (78%) still get news on the desktop or laptop as well.
A PEJ survey of more than 3,000 adults also finds that the reputation or brand of a news organization, a very traditional idea, is the most important factor in determining where consumers go for news, and that is even truer on mobile devices than on laptops or desktops. Indeed, despite the explosion in social media use through the likes of Facebook and Twitter, recommendations from friends are not a major factor yet in steering news consumption.
In the post-PC present, we have news up the ying, exploding out of all our devices like volcanic magma. But the Pew verbiage about who profits misses an essential point — typified by the ‘news consumption’ viewpoint they still espouse — we have moved away from audience-centered media to experience-centered media. The experience is what matters, so that’s why the value shifts to the tools we use to use information shaped by the news form factor. Using information is not equivalent to ‘consuming media’, but the media companies don’t get it.
The new media folks desperately want to write for some hypothetical audience, one they can find the center of. They are like border collies, wired to herd sheep and frantic if they can’t find any.
Read the full report.
Smarter Marketing: Connect with your customer (by IBM)
Predictive analytics brings science to the art of customer engagement, helping create a seamless experience that can give customers what they want, when they want it. Connect with your customers.
Gladwell throws down. “These are not moral leaders, if they were moral leaders they wouldn’t be great businessmen.”
“When Kirk asks the computer some complex question and it answers him intelligently, drawing from a bunch of different sources, that’s the vision,” Google spokesperson Jason Freidenfelds told me last week. He then introduced me to John Giannandrea, Director of Engineering at Google, and the man tasked with making the Star Trek computer come to life. Giannandrea and his team recently launched Knowledge Graph, an informational meta-sidebar you may have noticed in Google Search.
Despite the massive amounts of computing power dedicated by search engine companies to crawling and indexing trillions of documents on the Internet, search engines still can’t do what nearly any human can: tell the difference between a star, a 1970s TV show, and a Turkish alternative rock band. That’s because Web indexing has been based on the bare words found on webpages, not on what they mean. Since the beginning, search engines have essentially matched strings of text, says Shashi Thakur, a technical lead for Google’s search team. “When you try to match strings, you don’t get a sense of what those strings mean. We should have a connection to real-world knowledge of things and their properties and connections to other things.” Making those connections is the reason for recent major changes within the search engines at Microsoft and Google. Microsoft’s Satori and Google’s Knowledge Graph both extract data from the unstructured information on webpages to create a structured database of the “nouns” of the Internet: people, places, things, and the relationships between them all. The changes aren’t cosmetic; for Google, for example, this was the company’s biggest retooling to search since rolling out “universal search” in 2007. (via How Google and Microsoft taught search to “understand” the Web | Ars Technica)
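The contrast the article draws between string matching and entity matching can be made concrete with a toy example. The mini-graph below is purely illustrative (the entity keys, fields, and scoring are invented for this sketch, not how Knowledge Graph or Satori actually work), but it shows why structured "nouns" let a search engine tell three things named "Star" apart where bare string matching cannot.

```python
# A toy knowledge graph: entities are typed nodes with relations,
# so one surface string ("Star") maps to several distinct things.
ENTITIES = {
    "star/astronomy": {"name": "Star", "type": "CelestialBody",
                       "related": ["Sun", "Supernova"]},
    "star/tv":        {"name": "Star", "type": "TVShow",
                       "related": ["1970s television"]},
    "star/band":      {"name": "Star", "type": "MusicGroup",
                       "related": ["Turkish alternative rock"]},
}


def string_match(query):
    """Old-style search: every entity whose name contains the query
    string matches, with no sense of which 'Star' the user means."""
    return [k for k, e in ENTITIES.items()
            if query.lower() in e["name"].lower()]


def entity_match(query, context):
    """Graph-style search: use the rest of the query as context and
    pick the entity whose relations overlap it most."""
    best, best_score = None, 0
    for key, entity in ENTITIES.items():
        score = sum(1 for rel in entity["related"]
                    for word in context if word.lower() in rel.lower())
        if score > best_score:
            best, best_score = key, score
    return best
```

A query for "star" alone matches all three entities, while "star Turkish rock" resolves to the band: the context words connect to a specific node's relations rather than to a bag of text.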
Imagine a world without man-made climate change, energy crunches or reliance on foreign oil. It may sound like a dream world, but University of Tennessee, Knoxville, engineers have made a giant step toward making this scenario a reality.
UT researchers have successfully developed a key technology in developing an experimental reactor that can demonstrate the feasibility of fusion energy for the power grid. Nuclear fusion promises to supply more energy than the nuclear fission used today but with far fewer risks.
Mechanical, aerospace and biomedical engineering professors David Irick, Madhu Madhukar and Masood Parang are engaged in a project involving the United States, five other nations, and the European Union, known as ITER. UT researchers completed a critical step this week by successfully testing the technology that will insulate and stabilize the central solenoid, the reactor’s backbone.
Art installation merges gardening and technology, creating Arduino-powered frames with sensors so the plants can be monitored, and interacted with, via a smartphone app. From the project’s Kickstarter page:
We’re creating a space where a community who loves architecture, technology and plants can meet. Our mission is to integrate these disciplines into a new paradigm that changes the way we live and interact with nature. We believe that interacting with plants will improve our lives.
Plant-in City taps into the natural systems that foster plant life to give the plants themselves a voice. This revolutionary planter system contains built-in sensors that are activated by sun exposure, changes in soil moisture, humidity, temperature, and other natural cycles. Once activated, these sensors translate the environmental data into sounds or visuals, creating an imaginary vibrant wilderness.
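The idea of sensors translating environmental data into sounds could look something like this minimal sketch. The field names, thresholds, and the moisture-to-pitch mapping are all assumptions for illustration; the project's actual Arduino firmware and app are not public in this post.

```python
def moisture_to_pitch(moisture_pct, low_hz=220.0, high_hz=880.0):
    """Map soil moisture (0-100%) linearly onto a frequency range:
    dry soil hums low, well-watered soil sings high."""
    moisture_pct = max(0.0, min(100.0, moisture_pct))
    return low_hz + (high_hz - low_hz) * moisture_pct / 100.0


def planter_state(readings):
    """Summarize one sensor sweep for the app (hypothetical fields:
    soil_moisture in percent, light_lux in lux)."""
    return {
        "pitch_hz": round(moisture_to_pitch(readings["soil_moisture"]), 1),
        "needs_water": readings["soil_moisture"] < 20.0,
        "in_sun": readings["light_lux"] > 10_000,
    }
```

A smartphone app polling the planter could play `pitch_hz` as a tone and flag `needs_water`, turning the raw environmental cycle into something a person can hear and act on.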
More about the project can be found here