1010Computers | Computer Repair & IT Support

Google rolls out app time management controls

Google today announced at its I/O developer conference a new suite of tools for its new Android P operating system that will help users better manage their screen time, including a more robust do not disturb mode and ways to track app usage.

The biggest change is introducing a dashboard to Android P that tracks all of your Android usage, labeled under the “digital wellbeing” banner. Users can see how many times they’ve unlocked their phones, how many notifications they get, and how long they’ve spent on apps, for example. Developers can also add in ways to get more information on that app usage. YouTube, for example, will show total watch time across all devices in addition to just Android devices.

Google says it has designed all of this to promote what developers call “meaningful engagement,” trying to reduce the kind of idle screen time that might not necessarily be healthy — like sitting on your phone before you go to bed. Here’s a quick rundown of some of the other big changes:

  • Google’s do not disturb mode is getting additional ways to ignore notifications. Users can turn their phones over in order to automatically engage do not disturb, a gesture that Google is calling “shush.” Google is also reducing visual notifications in addition to texts and calls when do not disturb is activated.
  • Google is also introducing a “wind down” mode that activates before users go to bed. Wind down mode changes the screen color to a grayscale, and lowers the brightness over time. This one is geared toward helping people put their phones down when they’re going to bed.
  • Users can set time limits on their apps. Android P will nudge users when they are approaching that time limit, and once they hit it, the app will turn gray on the launcher to indicate that they’ve exceeded the screen time they wanted for that app.
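The cap-and-nudge behavior described above reduces to comparing accumulated usage against a per-app budget. Here is a minimal conceptual sketch, not Android’s actual implementation; the `AppTimer` class, the app names, and the 80% nudge threshold are all illustrative assumptions:

```python
from collections import defaultdict

class AppTimer:
    """Hypothetical model of per-app limits: track minutes used against
    a daily cap, nudge near the cap, gray the app out at the cap."""

    def __init__(self, limits):
        self.limits = limits           # app name -> daily limit in minutes
        self.used = defaultdict(int)   # app name -> minutes used today

    def log(self, app, minutes):
        self.used[app] += minutes

    def status(self, app):
        limit = self.limits.get(app)
        if limit is None:
            return "no limit"
        if self.used[app] >= limit:
            return "grayed out"       # limit reached: app dims on launcher
        if self.used[app] >= 0.8 * limit:
            return "nudge"            # approaching the limit (assumed threshold)
        return "ok"
```

For example, with a 60-minute limit, logging 50 minutes would cross the assumed 80% threshold and trigger a nudge, and 60 or more would gray the app out.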

The launch had been previously reported by The Washington Post, and arrives at a time when there are increasing concerns about the negative side of technology and, specifically, its addictive nature. The company already offers tools for parents who want to manage children’s devices, via Family Link – software for controlling access to apps, setting screen time limits, and configuring device bedtimes, among other things. Amazon also offers a robust set of parental controls for its Fire tablets, and Apple is expected to launch an expanded set of parental controls for iOS later this year.

Powered by WPeMatico

Android P leverages DeepMind for new Adaptive Battery feature

No surprise here, Android P was the highlight of today’s Google I/O keynote. The new version of the company’s mobile operating system still doesn’t have a name (at least not as of this writing), but the company has already highlighted a number of key new features, including, notably, Adaptive Battery.

Aimed at taking on basically everyone’s biggest complaint about their handset, the new feature is designed to make more efficient use of on-board hardware. Google’s own DeepMind is doing much of the heavy lifting here, relying on user habits to determine which apps they use and when, and allocating power accordingly.

According to the company, the new feature is capable of “anticipating actions,” resulting in 30-percent fewer CPU wakeups. Google has promised more information on the feature in the upcoming developer keynote. Combined with larger on-board batteries and faster charging in recent handsets, the new tech could go a long way toward changing the way users interact with their devices, shifting the all-night charge model to quick charging bursts — meaning, for better or worse, you can sleep with your handset nearby without having to worry about keeping it plugged in.


Google Assistant is coming to Google Maps

Google wants to bundle its voice assistant into every device and app. And it’s true that it makes sense to integrate Google Assistant in Google Maps. It’ll be available on iOS and Android this summer.

At Google I/O, director of Google Assistant Lilian Rincon showed a demo of Google Maps with Google Assistant. Let’s say you’re driving and you’re using Google Maps for directions. You can ask Google Assistant to share your ETA without touching your phone.

You can also control music with your voice, for instance. Rincon even played music on YouTube, but without the video element, of course. It lets you access YouTube’s extensive music library while driving.

If you’re using a newer car with Android Auto or Apple CarPlay, you’ve already been using voice assistants in your car. But many users rely exclusively on their phone. That’s why it makes sense to integrate Google Assistant in Google Maps directly.

It’s also a great way to promote Google Assistant to users who are not familiar with it yet. That could be an issue as Google Assistant asks for a ton of data when you first set it up. It forces you to share your location history, web history and app activity. Basically you let Google access everything you do with your phone.


iOS will soon disable USB connection if left locked for a week

In a move seemingly designed specifically to frustrate law enforcement, Apple is adding a security feature to iOS that totally disables data being sent over USB if the device isn’t unlocked for a period of 7 days. This spoils many methods for exploiting that connection to coax information out of the device without the user’s consent.

The feature, called USB Restricted Mode, was first noticed by Elcomsoft researchers looking through the iOS 11.4 code. It disables USB data (it will still charge) if the phone is left locked for a week, re-enabling it if it’s unlocked normally.
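As described, the rule reduces to a simple time comparison against the last unlock. Here is a minimal sketch of that policy; Apple’s actual implementation is not public, so the function name and date handling are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Per the report: USB data is cut off after 7 days locked.
RESTRICTION_WINDOW = timedelta(days=7)

def usb_data_allowed(last_unlock: datetime, now: datetime) -> bool:
    """Model of USB Restricted Mode as reported: data over the port stays
    enabled only while the device was unlocked within the past seven days.
    Charging is unaffected; this gates only the data connection."""
    return now - last_unlock < RESTRICTION_WINDOW
```

Unlocking the phone normally would reset `last_unlock`, re-enabling the data connection, which matches the behavior Elcomsoft describes.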

Normally when an iPhone is plugged into another device, whether it’s the owner’s computer or another, there is an interchange of data where the phone and computer figure out if they recognize each other, if they’re authorized to send or back up data, and so on. This connection can be taken advantage of if the computer being connected to is attempting to break into the phone.

USB Restricted Mode is likely a response to the fact that iPhones seized by law enforcement or by malicious actors like thieves essentially will sit and wait patiently for this kind of software exploit to be applied to them. If an officer collects a phone during a case, but there are no known ways to force open the version of iOS it’s running, no problem: just stick it in evidence and wait until some security contractor sells the department a 0-day.

But what if, a week after that phone was taken, it shut down its own Lightning port’s ability to send or receive data or even recognize it’s connected to a computer? That would prevent the law from ever having the opportunity to attempt to break into the device unless they move with a quickness.

On the other hand, had its owner simply left the phone at home while on vacation, they could pick it up, put in their PIN and it’s like nothing ever happened. Like the very best security measures, adversaries will curse its name while users may not even know it exists. Really, this is one of those security features that seems obvious in retrospect and I would not be surprised if other phone makers copy it in short order.

Had this feature been in place a couple of years ago, it would have prevented that entire drama with the FBI, which milked its ongoing inability to access a target phone for months, reportedly concealing its own capabilities all the while, likely to make it a political issue and manipulate lawmakers into compelling Apple to help. That kind of grandstanding doesn’t work so well on a seven-day deadline.

It’s not a perfect solution, of course, but there are no perfect solutions in security. This may simply force all iPhone-related investigations to get high priority in courts, so that existing exploits can be applied legally within the seven-day limit (and, presumably, every few days thereafter). All the same, it should be a powerful barrier against the kind of eventual, potential access through undocumented exploits from third parties that seems to threaten even the latest models and OS versions.


Google adds Morse code input to Gboard

Google is adding Morse code input to its mobile keyboard. It’ll be available as a beta on Android later today. The company announced the new feature at Google I/O after showing a video of Tania Finlayson.

Finlayson has been having a hard time communicating with other people due to her condition. She found a great way to write sentences and talk with people using Morse code.

Her husband developed a custom device that analyzes her head movements and transcodes them into Morse code. When she triggers the left button, it adds a short signal, while the right button triggers a long signal. Her device then converts the text into speech.

Google’s implementation will replace the keyboard with two areas for short and long signals. There are multiple word suggestions above the keyboard just like on the normal keyboard. The company has also created a Morse poster so that you can learn Morse code more easily.
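Under the hood, a two-key Morse input method is essentially a lookup table from dot/dash sequences to letters. Here is a minimal conceptual sketch — not Gboard’s actual code; the function names are illustrative:

```python
# International Morse code for the letters A-Z.
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}
FROM_MORSE = {code: letter for letter, code in MORSE.items()}

def encode(text):
    """Text -> list of per-letter dot/dash strings (one per keyed letter)."""
    return [MORSE[c] for c in text.upper() if c in MORSE]

def decode(signals):
    """List of per-letter dot/dash strings -> text, the way a two-key
    input method (short press = dot, long press = dash) might."""
    return ''.join(FROM_MORSE.get(s, '?') for s in signals)
```

For example, keying short-short-short, long-long-long, short-short-short produces `['...', '---', '...']`, which decodes to "SOS".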

As with all accessibility features, the more input methods the better. Everything that makes technology more accessible is a good thing.

Of course, Google used its gigantic I/O conference to introduce this feature to make the company look good too. But it’s a fine trade-off, a win-win for both Google and users who can’t use a traditional keyboard.

Correction: A previous version of this article said that Morse code is available on iOS and Android. The beta is only available on Android.


Google announces a new generation for its TPU machine learning hardware

As the war for creating customized AI hardware heats up, Google announced at Google I/O 2018 that it is rolling out its third generation of silicon, the Tensor Processing Unit 3.0.

Google CEO Sundar Pichai said the new TPU is eight times more powerful than last year’s per pod, with up to 100 petaflops in performance. Google joins pretty much every other major company in looking to create custom silicon to handle its machine learning operations. And while multiple frameworks for developing machine learning tools have emerged, including PyTorch and Caffe2, this one is optimized for Google’s TensorFlow. Google is looking to make Google Cloud an omnipresent platform at the scale of Amazon, and offering better machine learning tools is quickly becoming table stakes.

Amazon and Facebook are both working on their own kind of custom silicon. Facebook’s hardware is optimized for its Caffe2 framework, which is designed to handle the massive information graphs it has on its users. You can think about it as taking everything Facebook knows about you — your birthday, your friend graph, and everything that goes into the news feed algorithm — fed into a complex machine learning framework that works best for its own operations. That, in the end, may have ended up requiring a customized approach to hardware. We know less about Amazon’s goals here, but it also wants to own the cloud infrastructure ecosystem with AWS. 

All this has also spun up an increasingly large and well-funded startup ecosystem looking to create customized hardware targeted toward machine learning. There are startups like Cerebras Systems, SambaNova Systems, and Mythic, with a half dozen or so beyond that as well (not even including the activity in China). Each is looking to exploit a similar niche, which is to find a way to outmaneuver Nvidia on price or performance for machine learning tasks. Most of those startups have raised more than $30 million.

Google unveiled its second-generation TPU processor at I/O last year, so it wasn’t a huge surprise that we’d see another one this year. We’d heard from sources for weeks that it was coming, and that the company is already hard at work figuring out what comes next. Google at the time touted performance, though the point of all these tools is to make it a little easier and more palatable in the first place. 

This is also the first time the company has had to include liquid cooling in its data centers, CEO Sundar Pichai said. Heat dissipation is an increasingly difficult problem for companies looking to create customized hardware for machine learning.

There are a lot of questions around building custom silicon, however. It may be that developers don’t need a super-efficient piece of silicon when an Nvidia card that’s a few years old can do the trick. But data sets are getting increasingly larger, and having the biggest and best data set is what creates defensibility for any company these days. Just the prospect of making it easier and cheaper as companies scale may be enough to get them to adopt something like GCP.

Intel, too, is looking to get in here with its own products. Intel has been beating the drum on FPGAs as well, which are designed to be more modular and flexible as the needs of machine learning change over time. But again, the knock there is price and difficulty, as FPGA programming is a hard problem in which not many engineers have expertise. Microsoft is also betting on FPGAs, and unveiled what it’s calling Brainwave just yesterday at its BUILD conference for its Azure cloud platform — which is increasingly a significant portion of its future potential.

Google more or less seems to want to own the entire stack of how we operate on the internet. It starts at the TPU, with TensorFlow layered on top of that. If it manages to succeed there, it gets more data, makes its tools and services faster and faster, and eventually reaches a point where its AI tools are too far ahead and locks developers and users into its ecosystem. Google is at its heart an advertising business, but it’s gradually expanding into new business segments that all require robust data sets and operations to learn human behavior. 

Now the challenge will be having the best pitch for developers to not only get them into GCP and other services, but also keep them locked into TensorFlow. But as Facebook increasingly looks to challenge that with alternate frameworks like PyTorch, there may be more difficulty than originally thought. Facebook unveiled a new version of PyTorch at its main annual conference, F8, just last month. We’ll have to see if Google is able to respond adequately to stay ahead, and that starts with a new generation of hardware.



Watch Google I/O keynote live right here

How did you find Microsoft Build yesterday? We don’t really have time for your answer because Google I/O is already here! Google is kicking off its annual developer conference today. As usual, there will be a consumer keynote with major new products in the morning, and a developer-centric keynote in the afternoon.

The conference starts at 10 AM Pacific Time (1 PM on the East Coast, 6 PM in London, 7 PM in Paris) and you can watch the live stream right here on this page. The developer keynote will be at 12:45 PM Pacific Time.

Rumor has it that Google is about to share more details about Android P, the next major release of its Android platform. But you can also expect some Google Assistant and Google Home news, some virtual reality news and maybe even some Wear OS news. We have a team on the ground ready to cover the event, so don’t forget to read TechCrunch to get our take on today’s news.


Now you can make reservations and buy movie tickets on Instagram

Instagram is unveiling new features for businesses that want to use their profiles to message with customers and even facilitate transactions.

Even if you’re not a business on Instagram, you might still notice the addition of action buttons, which will allow you to make a reservation, buy a ticket, start an order or make a booking using third party services, all from an Instagram business profile.

The initial integrations include (deep breath) Acuity, Atom Tickets, Booksy, ChowNow, Eatstreet, Eventbrite, Fandango, GrubHub, MyTime, OpenTable, Reserve, Restorando, Resy, SevenRooms, StyleSeat, Tock and Yelp Reservations, with plans to add Appointy, Genbook, LaFourchette, Mindbody, Schedulicity, SetMore, Shedul and Vagaro soon.

It looks like the buttons essentially open up a browser window or widget for users to perform their chosen actions, so it’s not quite native functionality in the Instagram app. Still, it means that these interactions are now just a tap away. And an Instagram spokesperson told us that in the case of Atom Tickets, the actions do take advantage of Instagram’s native payments.

The company says that more than 200 million users visit an Instagram business profile every day, while more than 150 million have an Instagram Direct conversation with a business in a given month.

“As more people continue to interact with businesses on Instagram and take action when inspiration strikes, we’re making it easier to turn that discovery into action,” Instagram said in a blog post announcing the new feature.

Instagram Direct quick replies

In addition, the company is rethinking how businesses handle their Instagram Direct messages. Customer messages now show up in the main inbox, rather than the pending folder, with the ability to star conversations that the business wants to come back to. Instagram also says it will start testing quick replies, so that a business can just select prewritten responses to standard questions, rather than typing the same answer over and over again.


BotChain wants to put bot-to-bot communication on the blockchain

Increasingly, we are going to have bots conducting business on a company’s behalf. As that happens, it is going to require a trust mechanism to ensure that bot-to-bot communication is legitimate. BotChain, a new startup out of Boston, wants to be the blockchain for registering bots.

The new blockchain, which is built on Ethereum, is designed to register and identify bots and to give companies a way to have their bots collaborate, with auditing capabilities built in. BotChain has the potential to become a standard way of sharing data between bots in a trusted way.

The idea is to have an official and sanctioned place for companies to register their bots securely. As the organization describes it, “BotChain offers bot developers, enterprises, software companies, and system integrators the critical systems, standards, and means to validate, certify, and manage the millions of bots and billions of transactions powered by AI.”


The company was created by the team at Talla, a bot startup in Cambridge, but the goal is to open this up to a much larger community of partners and expand. In fact, early partners include Gupshup, a platform for developers, and Howdy.ai, a B2B enterprise bot developer, along with Polly, CareerLark, Disco (formerly Growbot), Zoom.ai, and Botkeeper.

BotChain is the brainchild of Rob May, who is CEO at Talla. He was formerly co-founder and CEO at Backupify, which was sold to Datto in 2014. He recognized that as bot usage increases, there needed to be a system in place to help companies using bots to exchange information, and eventually even digital currencies to complete transactions in a fully digital context.

May believes that blockchain is the best solution to build this trust mechanism because of the ledger’s nature as an immutable and irrefutable record. If the entities on the blockchain agree to work with one another, and the other members allow it, there should be an element of confidence inherent in that.

He points to other advantages such as being decentralized so that no single company can control the data on the blockchain, and of course nobody can erase a record once it’s been written to the chain. It also provides a way for bots to identify one another in an official way and for participating companies to track transactions between bots.
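The append-only, tamper-evident property May describes can be illustrated with a minimal hash chain, where each entry’s hash incorporates the previous entry’s hash, so any retroactive edit breaks verification of everything after it. This is a conceptual sketch, not BotChain’s Ethereum-based implementation; the class and field names are illustrative:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a registration record together with the previous entry's hash,
    chaining entries so earlier ones can't be silently rewritten."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class BotLedger:
    """Toy append-only ledger of bot registrations."""

    def __init__(self):
        self.entries = []  # list of (record, hash) tuples

    def register(self, bot_id: str, owner: str):
        prev = self.entries[-1][1] if self.entries else GENESIS
        record = {"bot_id": bot_id, "owner": owner}
        self.entries.append((record, record_hash(record, prev)))

    def verify(self) -> bool:
        """Recompute every hash; any edited record breaks the chain."""
        prev = GENESIS
        for record, h in self.entries:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True
```

Registering two bots and then altering the first record would cause `verify()` to fail, which is the immutability guarantee (here in miniature) that the blockchain approach is meant to provide.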

Talla opened this up to a community of users because it wants BotChain to be a standard way for bots to exchange information. Whether that happens or not remains to be seen, but these types of projects could be important building blocks as companies look for ways to conduct business confidently, even when there are no humans involved.

BotChain has raised $5 million in a private token sale to institutional investors such as Galaxy Digital, Pillar, Glasswing and Avalon, according to the company.

In addition, they will be conducting another token pre-sale starting this Friday to raise additional funds from community stakeholders. “This token sale is a way to give [our community] access. Purchasing these tokens allows users to start registering their assets and create chains of immutable records of what their machines have done,” May explained. He said the company expects to sell about $20 million worth of tokens this year.

You can learn more about Botchain from this video:


Microsoft and Red Hat now offer a jointly managed OpenShift service on Azure

Microsoft and Red Hat are deepening their existing alliance around cloud computing. The two companies will now offer a managed version of OpenShift, Red Hat’s container application platform, on Microsoft Azure. This service will be jointly developed and managed by Microsoft and Red Hat and will be integrated into the overall Azure experience.

Red Hat OpenShift on Azure is meant to make it easier for enterprises to create hybrid container solutions that can span their on-premise networks and the cloud. That’ll give these companies the flexibility to move workloads around as needed and will give those companies that have bet on OpenShift the option to move their workloads close to the rest of Azure’s managed services like Cosmos DB or Microsoft’s suite of machine learning tools.

Microsoft’s Brendan Burns, one of the co-creators of Kubernetes, told me that the companies decided that this shouldn’t just be a service that runs on top of Azure and consumes the Azure APIs. Instead, the companies made the decision to build a native integration of OpenShift into Azure — and specifically the Azure Portal. “This is a first in class fully enterprise-supported application platform for containers,” he said. “This is going to be an experience where enterprises can have all the experience and support they expect.”

Red Hat VP for business development and architecture Mike Ferris echoed this and added that his company is seeing a lot of demand for managed services around containers.
