1010Computers | Computer Repair & IT Support

Google launches a standalone version of Drive for businesses that don’t want the full G Suite

Until now, businesses that wanted to use Google Drive had only one option: buy a full G Suite subscription, even if they didn't want or need access to the rest of the company's productivity tools. Starting today, though, these businesses will be able to buy a subscription to a standalone version of Google Drive, too.

Google says that a standalone version of Drive has been at the top of the list of requests from prospective customers, so it's now giving them that option in the form of this new service (though to be honest, I'm not sure how much demand there really is for this product). Standalone Google Drive will come with the same online storage and sharing features as the G Suite version.

Pricing will be based on usage. Google will charge $8 per month per active user and $0.04 per GB stored in a company’s Drive.
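At those rates, a back-of-the-envelope calculation shows how a bill would scale. This is just a sketch: the function name is ours, and it assumes flat rates with no volume discounts or minimums, which Google hasn't detailed.

```python
def drive_standalone_monthly_cost(active_users: int, stored_gb: float) -> float:
    """Estimate a monthly bill for standalone Google Drive.

    Uses the two published rates: $8 per active user per month plus
    $0.04 per GB stored. Volume discounts or minimums, if any, aren't
    modeled here.
    """
    PER_USER_USD = 8.00   # per active user per month
    PER_GB_USD = 0.04     # per GB stored per month
    return round(active_users * PER_USER_USD + stored_gb * PER_GB_USD, 2)

# A 50-person team storing 2 TB (2,048 GB) would pay:
print(drive_standalone_monthly_cost(50, 2048))  # 481.92
```

Note that the per-user charge dominates for small teams, while storage becomes the bigger line item as archives grow.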

Google’s idea here is surely to convert those standalone Drive users to full G Suite users over time, but it’s also an acknowledgement on Google’s part that not every business is ready to move away from legacy email tools and desktop-based productivity applications like Word and Excel just yet (and that its online productivity suite may not be right for all of those businesses, too).

Drive, by the way, is going to hit a billion users this week, Google keeps saying. I guess I appreciate that they don’t want to jump the gun and are actually waiting for that to happen instead of just announcing it now when it’s convenient. Once it does, though, it’ll become the company’s eighth product with more than a billion users.

Powered by WPeMatico

Google takes on Yubico and builds its own hardware security keys

Google today announced it is launching its own hardware security keys for two-factor authentication. These so-called Titan Security Keys will go up against similar keys from companies like Yubico, which Google has long championed as the de facto standard for hardware-based two-factor authentication for Gmail and other services.

The FIDO-compatible Titan keys will come in two versions: one with Bluetooth support for mobile devices and one that plugs directly into your computer's USB port. In terms of looks and functionality, they are quite similar to the existing keys from Yubico, though our understanding is that these are Google's own designs.

Unsurprisingly, the folks over at Yubico got wind of today’s announcement ahead of time and have already posted a reaction to today’s news (and the company is exhibiting at Google Cloud Next, too, which may be a bit awkward after today’s announcement).

“Yubico strongly believes there are security and privacy benefits for our customers, by manufacturing and programming our products in USA and Sweden,” Yubico founder and CEO Stina Ehrensvard writes, and goes on to throw a bit of shade on Google’s decision to support Bluetooth. “Google’s offering includes a Bluetooth (BLE) capable key. While Yubico previously initiated development of a BLE security key, and contributed to the BLE U2F standards work, we decided not to launch the product as it does not meet our standards for security, usability and durability. BLE does not provide the security assurance levels of NFC and USB, and requires batteries and pairing that offer a poor user experience.”

It’s unclear who is manufacturing the Titan keys for Google (the company spokesperson didn’t know when asked during the press conference), but the company says that it developed its own firmware for the keys. And while Google is obviously using the same Titan brand it uses for the custom chips that protect the servers that make up its cloud, it’s also unclear if there is any relation between those.

No word on pricing yet, but the keys are now available to Google Cloud customers and will soon be available for purchase by anyone in the Google Store. Comparable keys tend to sell for around $20 to $25.


Google brings its search technology to the enterprise

One of Google’s first hardware products was its search appliance, a custom-built server that allowed businesses to bring Google’s search tools to the data behind their firewalls. That appliance is no more, but Google today announced the spiritual successor to it with an update to Cloud Search. Until today, Cloud Search only indexed G Suite data. Now, it can pull in data from a variety of third-party services that can run on-premise or in the cloud, making the tool far more useful for large businesses that want to make all of their data searchable by their employees.

“We are essentially taking all of Google's expertise in search and are applying it to your enterprise content,” Google said.

One of the launch customers for this new service is Whirlpool, which built its own search portal and indexed more than 12 million documents from more than a dozen services using this new service.

“This is about giving employees access to all the information from across the enterprise, even if it’s traditionally siloed data, whether that’s in a database or a legacy productivity tool, and making all of that available in a single index,” Google explained.

To enable this functionality, Google is making a number of software adapters available that will bridge the gap between these third-party services and Cloud Search. Over time, Google wants to add support for more services and bring this cloud-based technology on par with what its search appliance was once capable of.

The service is now rolling out to a select number of users. Over time, it'll become available both to G Suite users and as a standalone version.


Snark AI looks to help companies get on-demand access to idle GPUs

Riding the wave of machine learning's expansion into, well, just about everything is the emergence of GPUs as one of the go-to ways to handle all the processing behind those operations.

But getting access to those GPUs — whether using the cards themselves or possibly through something like AWS — might still be too difficult or too expensive for some companies or research teams. So Davit Buniatyan and his co-founders decided to start Snark AI, which helps companies rent GPUs that aren’t in use across a distributed network of companies that just have them sitting there, rather than through a service like Amazon. While the larger cloud providers offer similar access to GPUs, Buniatyan’s hope is that it’ll be attractive enough to companies and developers to tap a different network if they can lower that barrier to entry. The company is launching out of Y Combinator’s Summer 2018 class.

“We bet on that there will always be a gap between mining and AWS or Google Cloud prices,” Buniatyan said. “If the mining will be [more profitable than the cost of running a GPU], anyone can get into AWS and do mining and be profitable. We’re building a distributed cloud computing platform for clients that can easily access the resources there but are not used.”

The startup works with companies with a lot of spare GPUs that aren’t in use, such as gaming cloud companies or crypto mining companies. Teams that need GPUs for training their machine learning models get access to the raw hardware, while teams that just need those GPUs to handle inference get access to them through a set of APIs. There’s a distinction between the two because they are two sides to machine learning — the former building the model that the latter uses to execute some task, like image or speech recognition. When the GPUs are idle, they run mining to pay the hardware providers, and Snark AI also offers the capability to both mine and run deep learning inference on a piece of hardware simultaneously, Buniatyan said.

Snark AI matches the proper amount of GPU power to whatever a team needs, and then deploys it across a network of distributed idle cards that companies have in various data centers. It's one way to potentially reduce the lifetime cost of a GPU, which may require a substantial investment up front but can earn a return during the time it would otherwise sit idle. If that's the case, it may also encourage more companies to sign up with a network like this — Snark AI or otherwise — and deploy similar cards.
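The matching step described above can be pictured as a simple scheduler: given a request for GPU capacity, pick idle cards from the pool until the request is covered. The toy sketch below is purely illustrative — the names, the VRAM-based sizing and the greedy strategy are our assumptions, not Snark AI's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class IdleGPU:
    host: str        # data center / provider that owns the card
    memory_gb: int   # VRAM available on the card

def match_gpus(pool: list[IdleGPU], needed_gb: int) -> list[IdleGPU]:
    """Greedily pick idle cards until the requested VRAM is covered.

    Largest cards first, so a job spans as few hosts as possible.
    Returns an empty list if the pool can't satisfy the request.
    """
    chosen, covered = [], 0
    for gpu in sorted(pool, key=lambda g: g.memory_gb, reverse=True):
        if covered >= needed_gb:
            break
        chosen.append(gpu)
        covered += gpu.memory_gb
    return chosen if covered >= needed_gb else []

pool = [IdleGPU("miner-a", 8), IdleGPU("gamer-b", 11), IdleGPU("miner-c", 16)]
print([g.host for g in match_gpus(pool, 24)])  # ['miner-c', 'gamer-b']
```

A real scheduler would also weigh locality, interconnect bandwidth and whether a card is mid-mining, but the core allocation problem looks like this.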

There’s also an emerging trend of specialized chips that focus on machine learning or inference, which look to reduce the cost, power consumption or space requirements of machine learning tasks. That ecosystem of startups, like Cerebras Systems, Mythic, Graphcore or any of the other well-funded startups, all potentially have a shot at unseating GPUs for machine learning tasks. There’s also the emergence of ASICs, customized chips that are better suited to tasks like crypto mining, which could fracture an ecosystem like this — especially if the larger cloud providers decide to build or deploy something similar (such as Google’s TPU). But this also means that there’s room to potentially create some new interface layer that can snap up all the leftovers for tasks that companies might need, but don’t necessarily need bleeding-edge technology like that from those startups.

There’s always going to be the same argument that was made for Dropbox prior to its significant focus on enterprises and collaboration: the price falls dramatically as it becomes more commoditized. That might be especially true for companies like Amazon and Google, which have already run that playbook, and could leverage their dominance in cloud computing to put a significant amount of pressure on a third-party network like Snark AI. Google also has the ability to build proprietary hardware like the TPU for specialized operations. But Buniatyan said the company’s focus on being able to juggle inference and mining, in addition to keeping that cost low for idle GPUs of companies that are just looking to deploy, should keep it viable, even amid a changing ecosystem that’s focusing on machine learning.


Google Cloud introduces shielded virtual machines for additional security

While we might like to think all of our applications are equal in our eyes, in reality some are more important than others and require an additional level of security. To meet those requirements, Google introduced shielded virtual machines at Google Next today.

As Google describes it, “Shielded VMs leverage advanced platform security capabilities to help ensure your VMs have not been tampered with. With Shielded VMs, you can monitor and react to any changes in the VM baseline as well as its current runtime state.”

These specialized VMs run on GCP and come with a set of partner security controls to defend against things like rootkits and bootkits, according to Google. There are a whole bunch of things that happen even before an application launches inside a VM, and each step in that process is vulnerable to attack.

That’s because as the machine starts up, before you even get to your security application, it launches the firmware, the boot sequence, the kernel, then the operating system — and then and only then, does your security application launch.

That time between startup and the security application launching could leave you vulnerable to certain exploits that take advantage of those openings. The shielded VMs strip out as much of that process as possible to reduce the risk.

“What we’re doing here is we are stripping out any of the binary that doesn’t absolutely have to be there. We’re ensuring that every binary that is there is signed, that it’s signed by the right party, and that they load in the proper sequence,” a Google spokesperson explained. All of these steps should reduce overall risk.
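Conceptually, the process the spokesperson describes is a chain of signature checks, one per boot stage, where a bad signature halts the boot. The sketch below illustrates only that idea — it uses HMAC as a stand-in for real firmware signatures, and shielded VMs actually rely on mechanisms like UEFI Secure Boot and a virtual TPM, not this code.

```python
import hashlib
import hmac

ROOT_KEY = b"platform-root-of-trust"  # stands in for the vendor's signing key

def sign(blob: bytes) -> bytes:
    return hmac.new(ROOT_KEY, blob, hashlib.sha256).digest()

def verify_boot_chain(stages: list[tuple[str, bytes, bytes]]) -> bool:
    """Verify each boot stage (firmware -> bootloader -> kernel -> OS)
    in order; refuse to proceed past the first bad signature."""
    for name, blob, signature in stages:
        if not hmac.compare_digest(sign(blob), signature):
            print(f"tampering detected at stage: {name}")
            return False
    return True

chain = [(name, blob, sign(blob)) for name, blob in [
    ("firmware", b"fw-v1"), ("bootloader", b"grub"), ("kernel", b"vmlinuz")]]
print(verify_boot_chain(chain))  # True

chain[1] = ("bootloader", b"evil-grub", chain[1][2])  # tamper with one stage
print(verify_boot_chain(chain))  # flags the bootloader, then False
```

The point is the ordering: because each stage is checked before it runs, a rootkit or bootkit injected early in the sequence never gets a chance to execute.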

Shielded VMs are available in beta now.


Google’s big redesign for Gmail is now generally available for enterprise G Suite customers

Google is once again running its playbook of releasing big new products (or redesigns) to everyday users and then moving what works over to its enterprise service, G Suite — this time by making the Gmail redesign generally available to G Suite customers today.

Gmail’s redesign launched for consumers in April of this year, adding features like self-destructing messages and email snoozing, along with a somewhat new look for the service, which has more than 1 billion users. All those features are useful for consumers, but they may have even more compelling use cases inside larger companies, where anywhere from a few to thousands of employees are in constant communication. Email overload is a common complaint from, well, basically every single user on Facebook, Twitter, LinkedIn or anywhere else people can vent publicly, and any attempt to tackle it — one that works, at least — could have pretty substantial ramifications.

Google is directly competing with other enterprise mail services, especially as it looks to make G Suite a go-to set of enterprise tools for larger companies. It’s a nice, consistent business that can grow methodically — the kind of revenue stream Wall Street loves, and one that can cover potential trip-ups in other divisions. Google has also made a big push in its cloud efforts, especially on the infrastructure front, where it competes with rivals like Microsoft’s Azure — which makes it unsurprising that Google is announcing this at what is effectively its cloud conference, Google Cloud Next 2018 in San Francisco.

The new Gmail uses machine learning to find threat indicators across a huge bucket of messages to tackle some of the lowest-hanging fruit, like potential phishing attacks, that could compromise a company’s security and potentially cost millions of dollars. Google says those tools protect users from almost 10 million spam and malicious emails every minute, and the new update also gives G Suite users access to those security features, as well as offline access and the redesigned security warnings that Google included in its consumer-focused redesign.

Whether companies will adopt this redesign — or at least what rate they will — remains to be seen, as even small tweaks to any kind of software that has a massive amount of engagement can potentially interrupt the workflow of users. We’ve seen that happen before with Facebook users losing it over small changes to News Feed, and while enterprise Gmail is definitely a different category, Google has to take care to ensure that those small changes don’t interrupt the everyday use cases for enterprise users. If companies are going to pay Google for something like this, they have to get it right.


Google is rolling out a version of Google Voice for enterprise G Suite customers

Google today said it will be rolling out an enterprise version of its Google Voice service for G Suite users, potentially tapping a new demand source for Google that could help attract a whole host of new users.

Google Voice has long been an enjoyable service for everyday consumers, offering a lot of benefits beyond just having a normal phone number. The enterprise version of Google Voice appears to give companies a way to officially offer those kinds of tools — including AI-powered features like voicemail transcription — that employees may already be using on their own, potentially skirting company guidelines. Administrators can provision and port phone numbers, get detailed reports and set up call routing functionality. They can also deploy phone numbers to departments or employees, giving them a sort of universal number that isn’t tied to a device — and making it easier to get in touch with someone when necessary.

All of this is an effort to spread adoption of G Suite among larger enterprises, as it offers Google a nice, consistent business. While its advertising business continues to grow, the company is investing in cloud products as another revenue stream — one with a lot of headroom while Google figures out where its advertising market tops out and works on other projects like its hardware, Google Home and others.

While Google didn’t explicitly talk about it ahead of the conference today, there’s another potential opportunity for something like this: call centers. An enterprise version of Google Voice could give companies a way to provision out certain phone numbers to employees to handle customer service requests and get a lot of information about those calls. Google yesterday announced that it was rolling out a more robust set of call center tools that lean on its expertise in machine learning and artificial intelligence, and getting control of the actual numbers that those calls take in is one part of that equation.

There’s also a spam filtering feature, which will probably be useful in handling waves of robo-calls for various purposes. It’s another product that Google is porting over to its enterprise customers with a bit better controls for CTOs and CIOs after years of understanding how normal consumers are using it and having an opportunity to rigorously test parts of the product. That time also gives Google an opportunity to thoroughly research the gaps in the product that enterprise customers might need in order to sell them on the product.

Google Voice enterprise is going to be available as an early adopter product.


Grubhub acquires payments and loyalty company LevelUp for $390M

Grubhub announced this morning that it’s agreed to acquire LevelUp for $390 million cash.

Founder and CEO Matt Maloney told me that while previous Grubhub acquisitions like Eat24 were designed to give the company’s delivery business more scale, “This is kind of a different acquisition. It’s a product and strategic positioning acquisition.”

LevelUp is based in Boston, offering a platform to manage digital ordering, payments and loyalty, with customers like KFC, Taco Bell, Pret a Manger, Potbelly and Bareburger. Maloney said that buying the company allows Grubhub to deepen its integration with restaurants’ point-of-sale systems. That, in turn, will allow them to handle more deliveries.

At the same time, Maloney said LevelUp can help Grubhub build a restaurant platform that goes beyond delivery, for example by managing their customer interactions across mobile and the web.

“We want to help restaurants actively engage with their diners,” Maloney said. “This is a huge step in that direction.”

Once the regulatory waiting period is over, the entire LevelUp team will be joining Grubhub, with founder and CEO Seth Priebatsch reporting to Maloney — who said that in the short term, he plans to change very little, aside from the POS integrations. Even in the long term, he suggested that LevelUp could continue to operate as its own brand within the larger Grubhub platform.

“They’re doing something really well and we don’t want to screw that up,” he said. “We want to make as little change as possible, until we all understand how we’re better working together.”

The LevelUp platform was launched in 2011, and the company has raised around $108 million in total funding, according to Crunchbase. Investors include Highland Capital, GV, Balderton Capital, Deutsche Telekom Strategic Investments, Continental Advisors, Transmedia Capital and U.S. Boston Capital.

“For the last seven years, we have worked to provide restaurant clients with a complete solution to engage customers, and this agreement is the biggest and most exciting step in achieving that mission,” Priebatsch said in a statement provided by Grubhub. “After close, the entire team will remain in Boston and our office will become Grubhub’s newest center of technology excellence.”

The announcement came as part of Grubhub’s second quarter earnings release, which saw the company grow active diners by 70 percent year-over-year, to 15.6 million, while revenue increased 51 percent, to $240 million.


Chat app Line gets serious about gaming with its latest acquisition

Line, the company best-known for its popular Asian messaging app, is doubling down on games after it acquired a controlling stake in Korean studio NextFloor for an undisclosed amount.

NextFloor, which has produced titles like Dragon Flight and Destiny Child, will be merged with Line’s games division to form the Line Games subsidiary. Dragon Flight has racked up 14 million users since its 2012 launch — it clocked $1 million in daily revenue at peak. Destiny Child, a newer release in 2016, topped the charts in Korea and has been popular in Japan, North America and beyond.

Line’s own games are focused on its messaging app, which gives them access to social features such as friend graphs, and they have helped the company become a revenue generation machine. Alongside income from its booming sticker business, in-app purchases within games made Line Japan’s highest-earning non-game app publisher last year, according to App Annie, and the fourth highest worldwide. For some insight into how prolific it has been over the years, Line is ranked as the sixth highest earning iPhone app of all time.

But, despite revenue success, Line has struggled to become a global messaging giant. The big guns WhatsApp and Facebook Messenger have in excess of one billion monthly users each, while Line has been stuck around the 200 million mark for some time. Most of its numbers are from just four countries: Japan, Taiwan, Thailand and Indonesia. While it has been able to tap those markets with additional services like ride-hailing and payments, it is certainly under pressure from those more internationally successful competitors.

With that in mind, doubling down on games makes sense, and Line said it plans to focus on non-mobile platforms, including the Nintendo Switch among other consoles, from the second half of this year.

Line went public in 2016 via a dual U.S.-Japan IPO that raised over $1 billion.


Computer vision researchers build an AI benchmark app for Android phones

A group of computer vision researchers from ETH Zurich want to do their bit to enhance AI development on smartphones. To wit: They’ve created a benchmark system for assessing the performance of several major neural network architectures used for common AI tasks.

They’re hoping it will be useful to other AI researchers but also to chipmakers (by helping them get competitive insights); Android developers (to see how fast their AI models will run on different devices); and, well, to phone nerds — such as by showing whether or not a particular device contains the necessary drivers for AI accelerators. (And, therefore, whether or not they should believe a company’s marketing messages.)

The app, called AI Benchmark, is available for download on Google Play and can run on any device with Android 4.1 or higher — generating a score the researchers describe as a “final verdict” of the device’s AI performance.

AI tasks being assessed by their benchmark system include image classification, face recognition, image deblurring, image super-resolution, photo enhancement or segmentation.

They are even testing some algorithms used in autonomous driving systems, though there’s not really any practical purpose for doing that at this point. Not yet anyway. (Looking down the road, the researchers say it’s not clear what hardware platform will be used for autonomous driving — and they suggest it’s “quite possible” mobile processors will, in future, become fast enough to be used for this task. So they’re at least prepped for that possibility.)

The app also includes visualizations of the algorithms’ output to help users assess the results and get a feel for the current state-of-the-art in various AI fields.

The researchers hope their score will become a universally accepted metric — similar to DxOMark, which is used to evaluate camera performance — and all algorithms included in the benchmark are open source. The current ranking of different smartphones and mobile processors is available on the project’s webpage.

The benchmark system and app took around three months to develop, says AI researcher and developer Andrey Ignatov.

He explains that the score being displayed reflects two main aspects: The SoC’s speed and available RAM.

“Let’s consider two devices: one with a score of 6000 and one with a score of 200. If some AI algorithm will run on the first device for 5 seconds, then this means that on the second device this will take about 30 times longer, i.e. almost 2.5 minutes. And if we are thinking about applications like face recognition this is not just about the speed, but about the applicability of the approach: Nobody will wait 10 seconds till their phone will be trying to recognize them.

“The same is about memory: The larger is the network/input image — the more RAM is needed to process it. If the phone has a small amount of RAM that is e.g. only enough to enhance 0.3MP photo, then this enhancement will be clearly useless, but if it can do the same job for Full HD images — this opens up much wider possibilities. So, basically the higher score — the more complex algorithms can be used / larger images can be processed / it will take less time to do this.”
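Ignatov's scaling example above is simple inverse proportion: if the score is roughly linear in throughput, runtime on another device scales with the ratio of the two scores. A quick sketch of that arithmetic — the linearity assumption is ours, inferred from his example, and the benchmark makes no such formal guarantee:

```python
def estimated_runtime(ref_score: int, ref_seconds: float, target_score: int) -> float:
    """Estimate how long the same AI task would take on another device,
    assuming the benchmark score is proportional to throughput."""
    return ref_seconds * ref_score / target_score

# Ignatov's example: 5 s on a device scoring 6000 becomes
# ~2.5 minutes on a device scoring 200.
print(estimated_runtime(6000, 5.0, 200))  # 150.0
```

As he notes, a 30x slowdown is not just an inconvenience but a dividing line between features that are usable (face unlock in under a second) and features that aren't.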

Discussing the idea for the benchmark, Ignatov says the lab is “tightly bound” to both research and industry — so “at some point we became curious about what are the limitations of running the recent AI algorithms on smartphones”.

“Since there was no information about this (currently, all AI algorithms are running remotely on the servers, not on your device, except for some built-in apps integrated in phone’s firmware), we decided to develop our own tool that will clearly show the performance and capabilities of each device,” he adds. 

“We can say that we are quite satisfied with the obtained results — despite all current problems, the industry is clearly moving towards using AI on smartphones, and we also hope that our efforts will help to accelerate this movement and give some useful information for other members participating in this development.”

After building the benchmarking system and collating scores on a bunch of Android devices, Ignatov sums up the current situation of AI on smartphones as “both interesting and absurd”.

For example, the team found that devices running Qualcomm chips weren’t the clear winners they’d imagined — i.e. based on the company’s promotional materials about the Snapdragon 845’s AI capabilities and 8x performance acceleration.

“It turned out that this acceleration is available only for ‘quantized’ networks that currently cannot be deployed on the phones, thus for ‘normal’ networks you won’t get any acceleration at all,” he says. “The saddest thing is that actually they can theoretically provide acceleration for the latter networks too, but they just haven’t implemented the appropriated drivers yet, and the only possible way to get this acceleration now is to use Snapdragon’s proprietary SDK available for their own processors only. As a result — if you are developing an app that is using AI, you won’t get any acceleration on Snapdragon’s SoCs, unless you are developing it for their processors only.”

Meanwhile, the researchers found that Huawei’s Kirin 970 — whose CPU is technically even slower than the Snapdragon 636 — offered a surprisingly strong performance.

“Their integrated NPU gives almost 10x acceleration for Neural Networks, and thus even the most powerful phone CPUs and GPUs can’t compete with it,” says Ignatov. “Additionally, Huawei P20/P20 Pro are the only smartphones on the market running Android 8.1 that are currently providing AI acceleration, all other phones will get this support only in Android 9 or later.”

It’s not all great news for Huawei phone owners, though, as Ignatov says the NPU doesn’t provide acceleration for ‘quantized’ networks (though he notes the company has promised to add this support by the end of this year); and also it uses its own RAM — which is “quite limited” in size, and therefore you “can’t process large images with it”…

“We would say that if they solve these two issues — most likely nobody will be able to compete with them within the following year(s),” he suggests, though he also emphasizes that this assessment refers only to that one SoC, noting that Huawei’s other processors don’t have the NPU module.

For Samsung processors, the researchers flag up that all the company’s devices are still running Android 8.0 but AI acceleration is only available starting from Android 8.1 and above. Natch.

They also found CPU performance could “vary quite significantly” — by up to 50 percent on the same Samsung device — because of throttling and power optimization logic, which would then have a knock-on impact on AI performance.

For Mediatek, the researchers found the chipmaker is providing acceleration for both ‘quantized’ and ‘normal’ networks — which means it can reach the performance of “top CPUs”.

But, on the flip side, Ignatov calls out the company’s slogan — that it’s “Leading the Edge-AI Technology Revolution” — dubbing it “nothing more than their dream”, and adding: “Even the aforementioned Samsung’s latest Exynos CPU can slightly outperform it without using any acceleration at all, not to mention Huawei with its Kirin’s 970 NPU.”

“In summary: Snapdragon — can theoretically provide good results, but are lacking the drivers; Huawei — quite outstanding results now and most probably in the nearest future; Samsung — no acceleration support now (most likely this will change soon since they are now developing their own AI Chip), but powerful CPUs; Mediatek — good results for mid-range devices, but definitely no breakthrough.”

It’s also worth noting that some of the results were obtained on prototype samples, rather than shipped smartphones, so haven’t yet been included in the benchmark table on the team’s website.

“We will wait till the devices with final firmware will come to the market since some changes might still be introduced,” he adds.

For more on the pros and cons of AI-powered smartphone features check out our article from earlier this year.
