In the same week that Amazon is holding its big AWS confab, Google is also announcing a move to raise its own enterprise game with Google Cloud. Today the company announced that it is acquiring Actifio, a data management company that helps companies with data continuity to be better prepared in the event of a security breach or other need for disaster recovery. The deal squares Google up as a competitor against the likes of Rubrik, another big player in data continuity.
The terms of the deal were not disclosed in the announcement; we’re looking into it and will update as we learn more. Notably, when the company was valued at over $1 billion in a funding round back in 2014, it said it was preparing for an IPO (which never happened). PitchBook data estimated its value at $1.3 billion in 2018, but earlier this year it appeared to be raising money at about a 60% discount to its most recent valuation, according to data provided to us by Prime Unicorn Index.
The company was also involved in a patent infringement suit against Rubrik, which it filed earlier this year.
It had raised around $461 million, with investors including Andreessen Horowitz, TCV, Tiger, 83 North, and more.
With Actifio, Google is moving into what is one of the key investment areas for enterprises in recent years. The growth of increasingly sophisticated security breaches, coupled with stronger data protection regulation, has given a new priority to the task of holding and using business data more responsibly, and business continuity is a cornerstone of that.
Google describes the startup as a “leader in backup and disaster recovery” providing virtual copies of data that can be managed and updated for storage, testing, and more. The fact that it covers data in a number of environments — including SAP HANA, Oracle, Microsoft SQL Server, PostgreSQL and MySQL; virtual machines (VMs) in VMware and Hyper-V; physical servers; and of course Google Compute Engine — means that it also gives Google a strong play to work with companies in hybrid and multi-vendor environments rather than just all-Google shops.
“We know that customers have many options when it comes to cloud solutions, including backup and DR, and the acquisition of Actifio will help us to better serve enterprises as they deploy and manage business-critical workloads, including in hybrid scenarios,” writes Brad Calder, VP of engineering, in the blog post. “In addition, we are committed to supporting our backup and DR technology and channel partner ecosystem, providing customers with a variety of options so they can choose the solution that best fits their needs.”
The company will join Google Cloud.
“We’re excited to join Google Cloud and build on the success we’ve had as partners over the past four years,” said Ash Ashutosh, CEO at Actifio, in a statement. “Backup and recovery is essential to enterprise cloud adoption and, together with Google Cloud, we are well-positioned to serve the needs of data-driven customers across industries.”
While Salesforce made a big splash yesterday with the announcement that it’s buying Slack for $27.7 billion, it’s not the only thing going on for the CRM giant this week. In fact, Dreamforce, the company’s customer extravaganza, is also on the docket. While it is virtual this year, there are still product announcements aplenty, and today the company announced Einstein Automate, a new AI-fueled set of workflow solutions.
Sarah Franklin, EVP & GM of Platform, Trailhead and AppExchange at Salesforce, says she is seeing companies face a digital imperative to automate processes as work moves ever more quickly online, a shift the pandemic has only accelerated. “With Einstein Automate, everyone can change the speed of work and be more productive through intelligent workflow automation,” she said in a statement.
Brent Leary, principal analyst at CRM Essentials, says that combined, these tools are designed to help customers get to work more quickly. “It’s not only about identifying the insight, it’s about making it easier to leverage it at the right time. And this should make it easier for users to do it without spending more time and effort,” Leary told TechCrunch.
Einstein is the commercial name given to Salesforce’s artificial intelligence platform that touches every aspect of the company’s product line, bringing automation to many tasks and making it easier to find the most valuable information on customers, which is often buried in an avalanche of data.
Einstein Automate encompasses several products designed to improve workflows inside organizations. For starters, the company has created Flow Orchestrator, a tool that uses a low-code, drag and drop approach for building workflows, but it doesn’t stop there. It also relies on AI to provide help to suggest logical next steps to speed up workflow creation.
Salesforce is also bringing MuleSoft, the integration company it bought for $6.5 billion in 2018, into the mix. Where Flow Orchestrator handles process-level workflows like a mortgage approval, the MuleSoft piece lets IT more easily build complex integrations between applications across the enterprise and the Salesforce family of products.
To make it easier to build these workflows, Salesforce is announcing the Einstein Automate collection page, available in AppExchange, the company’s application marketplace. The collection includes more than 700 pre-built connectors so customers can grab and go as they build these workflows. Finally, it’s updating OmniStudio, its platform for generating customer experiences. As Salesforce describes it, “Included in OmniStudio is a suite of resources and no-code tools, including pre-built guided experiences, templates and more, allowing users to deploy digital-first experiences like licensing and permit applications quickly and with ease.”
As is often the case with Dreamforce announcements, Flow Orchestrator won’t be available in beta until next summer. The MuleSoft component will be available in early 2021, but the OmniStudio updates and the Einstein Automate connector collection are available today.
Jitsu, a graduate of the Y Combinator Summer 2020 cohort, is developing an open-source data integration platform that helps developers send data to a data warehouse. Today, the startup announced a $2 million seed investment.
Costanoa Ventures led the round, with participation from Y Combinator, The House Fund and SignalFire.
In addition to the open-source version of the software, the company has developed a hosted version that companies can pay to use, which shares the same name as the company. Peter Wysinski, Jitsu’s co-founder and CEO, says a good way to think about his company is an open-source Segment, the customer data integration company that was recently sold to Twilio for $3.2 billion.
But, he says, it goes beyond what Segment provides by allowing you to move all kinds of data, whether customer data, connected device data or other types. “If you look at the space in general, companies want more granularity. So let’s say for example, a couple years ago you wanted to sync just your transactions from QuickBooks to your data warehouse, now you want to capture every single sale at the point of sale. What Jitsu lets you do is capture essentially all of those events, all of those streams, and send them to your data warehouse,” Wysinski explained.
Among the data warehouses it currently supports are Amazon Redshift, Google BigQuery, Postgres and Snowflake.
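For a rough sense of what that event capture looks like in code, here is a minimal sketch of posting an event to a Jitsu-style HTTP ingestion endpoint; the host, path and token parameter here are illustrative assumptions, not Jitsu’s documented API.

```python
# Minimal sketch of pushing an event to a Jitsu-style ingestion endpoint.
# The endpoint URL and auth token are placeholder assumptions; consult the
# project's docs for the real interface.
import requests

event = {
    "event_type": "pos_sale",   # e.g. a single point-of-sale transaction
    "product_id": "sku-1234",
    "amount": 19.99,
    "store_id": "store-42",
}

resp = requests.post(
    "https://events.example.com/api/v1/event",  # placeholder endpoint
    params={"token": "SERVER_API_KEY"},         # placeholder credential
    json=event,
    timeout=5,
)
resp.raise_for_status()  # the event then flows to the configured warehouse
```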
The founders built the open-source project, called EventNative, to help solve problems they themselves were having moving data around at previous jobs. After they put it on GitHub a few months ago, the project quickly attained 1,000 stars, a sign that it solved a common problem for data teams. They then built the hosted version, Jitsu, which went live a couple of weeks ago.
For now, the company is just the two co-founders, Wysinski and CTO Vladimir Klimontovich, and a couple of contract engineers, but they intend to do some preliminary hiring over the next year to grow the company, most likely adding engineers. As they begin to build out the startup, Wysinski says that being open source will help drive diversity and inclusion in their hiring.
“The goal is essentially to go after that open-source community and hire people from anywhere because engineers aren’t just […] one color or one race, they’re everywhere, and being open source, and especially being in a remote world, makes it so, so much simpler [to build a diverse workforce], and a lot of companies I feel are going down that road,” he said.
Along the same lines, he says the plan is to be a fully remote company, even after the pandemic ends, hiring from anywhere. The goal is to hold quarterly offsite meetings to check in with employees, but to do the majority of the work remotely.
IT security software company Ivanti has acquired two security companies: Enterprise mobile security firm MobileIron and corporate virtual network provider Pulse Secure.
In a statement on Tuesday, Ivanti said it bought MobileIron for $872 million in stock — with 91% of the shareholders voting in favor of the deal — and acquired Pulse Secure from its parent company Siris Capital Group, but did not disclose the buying price.
The deals have now closed.
Ivanti was founded in 2017 when Clearlake Capital, which owned Heat Software, bought Landesk from private equity firm Thoma Bravo and merged the two companies. The combined company, headquartered in Salt Lake City, focuses largely on enterprise IT security, including endpoint, asset and supply chain management. Since its founding, Ivanti has gone on to acquire several other companies, including U.K.-based Concorde Solutions and RES Software.
If MobileIron and Pulse Secure sound familiar, it’s because both companies have faced their fair share of headlines this year after hackers began exploiting vulnerabilities found in their technologies.
Just last month, the U.K. government’s National Cyber Security Centre published an alert that warned of a remotely executable bug in MobileIron, patched in June, that could allow hackers to break into enterprise networks. U.S. Homeland Security’s cybersecurity advisory unit CISA said that the bug was being actively used by advanced persistent threat (APT) groups, typically associated with state-backed hackers.
Meanwhile, CISA also warned that Pulse Secure was one of several corporate VPN providers with vulnerabilities that have since become a favorite among hackers, particularly ransomware actors, who abuse the bugs to gain access to a network and deploy the file-encrypting ransomware.
Google today introduced a new mobile management and security solution, Android Enterprise Essentials, which, despite its name, is actually aimed at small to medium-sized businesses. The company explains this solution leverages Google’s experience in building Android Enterprise device management and security tools for larger organizations in order to come up with a simpler solution for those businesses with smaller budgets.
The new service includes the basics in mobile device management, with features that allow smaller businesses to require their employees to use a lock screen and encryption to protect company data. It also prevents users from installing apps outside the Google Play Store via the Google Play Protect service, and allows businesses to remotely wipe all the company data from phones that are lost or stolen.
As Google explains, smaller companies often handle customer data on mobile devices, but many of today’s remote device management solutions are too complex for small business owners and often complicated to get up and running.
Android Enterprise Essentials attempts to make the overall setup process easier by eliminating the need to manually activate each device. And because the security policies are applied remotely, there’s nothing the employees themselves have to configure on their own phones. Instead, businesses that want to use the new solution will just buy Android devices from a reseller to hand out or ship to employees with policies already in place.
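To make the idea concrete, here is a rough sketch of applying that kind of policy through Google’s Android Management API, which underpins Android Enterprise tooling; the enterprise and policy names are placeholders, and the specific fields shown are illustrative of the protections described rather than the exact Essentials configuration, which ships preconfigured.

```python
# Rough sketch: applying a basic device policy via the Android Management API.
# The enterprise/policy names are placeholders; Android Enterprise Essentials
# itself arrives with policies already in place, so this is for illustration.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/androidmanagement"],
)
service = build("androidmanagement", "v1", credentials=creds)

policy = {
    # Require a lock screen of some kind on enrolled devices.
    "passwordRequirements": {"passwordQuality": "SOMETHING"},
    # Restrict installs to apps approved in managed Google Play.
    "playStoreMode": "WHITELIST",
}

service.enterprises().policies().patch(
    name="enterprises/EXAMPLE_ID/policies/default",  # placeholder names
    body=policy,
).execute()
```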
Though primarily aimed at smaller companies, Google notes the solution may work for select larger organizations that want to extend some basic protections to devices that don’t require more advanced management solutions. The new service can also help companies get started with securing their mobile device inventory, before they move up to more sophisticated solutions over time, including those from third-party vendors.
The company has been working to better position Android devices for use in the workplace over the past several years, with programs like Android for Work, Android Enterprise Recommended, partnerships focused on ridding the Play Store of malware, advanced device protections for high-risk users, endpoint management solutions, and more.
Google says it will roll out Android Enterprise Essentials initially with distributors Synnex in the U.S. and Tech Data in the U.K. In the future, it will make the service available through additional resellers as it takes the solution global in early 2021. Google will also host an online launch event and demo in January for interested customers.
Video has worked the same way for a long, long time. And because of its unique qualities, video has been largely immune to the machine learning explosion upending industry after industry. WaveOne hopes to change that by taking the decades-old paradigm of video codecs and making them AI-powered — while somehow avoiding the pitfalls that would-be codec revolutionizers and “AI-powered” startups often fall into.
The startup has until recently limited itself to showing its results in papers and presentations, but with a recently raised $6.5M seed round, they are ready to move towards testing and deploying their actual product. It’s no niche: video compression may seem a bit in the weeds to some, but there’s no doubt it’s become one of the most important processes of the modern internet.
Here’s how it’s worked pretty much since the old days when digital video first became possible. Developers create a standard algorithm for compressing and decompressing video, a codec, which can easily be distributed and run on common computing platforms. This is stuff like MPEG-2, H.264, and that sort of thing. The hard work of compressing a video can be done by content providers and servers, while the comparatively lighter work of decompressing is done on the end user’s machines.
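For the compression half of that pipeline, a minimal sketch looks like this, assuming FFmpeg is installed and an input.mp4 exists on disk:

```python
# Minimal sketch of the traditional codec pipeline's "heavy" half:
# re-encoding a video with the standard H.264 codec via FFmpeg.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mp4",   # source video
        "-c:v", "libx264",   # standard H.264 encoder
        "-crf", "23",        # quality/size trade-off knob
        "output.mp4",
    ],
    check=True,
)
# Decompression on the viewer's device is comparatively cheap, and is often
# handled by dedicated H.264 hardware rather than the CPU.
```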
This approach is quite effective, and improvements to codecs (which allow more efficient compression) have led to the possibility of sites like YouTube. If videos were 10 times bigger, YouTube would never have been able to launch when it did. The other major change was beginning to rely on hardware acceleration of said codecs — your computer or GPU might have an actual chip in it with the codec baked in, ready to perform decompression tasks with far greater speed than an ordinary general-purpose CPU in a phone. Just one problem: when you get a new codec, you need new hardware.
But consider this: many new phones ship with a chip designed for running machine learning models, which, like codecs, can be hardware-accelerated — but unlike them, the hardware isn’t bespoke to any one model. So why aren’t we using this ML-optimized chip for video? Well, that’s exactly what WaveOne intends to do.
I should say that I initially spoke with WaveOne’s cofounders, CEO Lubomir Bourdev and CTO Oren Rippel, from a position of significant skepticism despite their impressive backgrounds. We’ve seen codec companies come and go, but the tech industry has coalesced around a handful of formats and standards that are revised in a painfully slow fashion. H.265, for instance, was introduced in 2013, but years afterwards its predecessor, H.264, was only beginning to achieve ubiquity. It’s more like the 3G, 4G, 5G system than version 7, version 7.1, etc. So smaller options, even superior ones that are free and open source, tend to get ground beneath the wheels of the industry-spanning standards.
This track record for codecs, plus the fact that startups like to describe practically everything as “AI-powered,” had me expecting something at best misguided, at worst scammy. But I was more than pleasantly surprised: in fact, WaveOne is the kind of thing that seems obvious in retrospect and appears to have a first-mover advantage.
The first thing Rippel and Bourdev made clear was that AI actually has a role to play here. While codecs like H.265 aren’t dumb — they’re very advanced in many ways — they aren’t exactly smart, either. They can tell where to put more bits into encoding color or detail in a general sense, but they can’t, for instance, tell where there’s a face in the shot that should be getting extra love, or a sign or trees that can be done in a special way to save time.
But face and scene detection are practically solved problems in computer vision. Why shouldn’t a video codec understand that there is a face, then dedicate a proportionate amount of resources to it? It’s a perfectly good question. The answer is that the codecs aren’t flexible enough. They don’t take that kind of input. Maybe they will in H.266, whenever that comes out, and a couple years later it’ll be supported on high-end devices.
So how would you do it now? Well, by writing a video compression and decompression algorithm that runs on AI accelerators many phones and computers have or will have very soon, and integrating scene and object detection in it from the get-go. Like Krisp.ai understanding what a voice is and isolating it without hyper-complex spectrum analysis, AI can make determinations like that with visual data incredibly fast and pass that on to the actual video compression part.
Variable and intelligent allocation of data means the compression process can be very efficient without sacrificing image quality. WaveOne claims to reduce the size of files by as much as half, with better gains in more complex scenes. When you’re serving videos hundreds of millions of times (or to a million people at once), even fractions of a percent add up, let alone gains of this size. Bandwidth doesn’t cost as much as it used to, but it still isn’t free.
Understanding the image (or being told) also lets the codec see what kind of content it is; a video call should prioritize faces if possible, of course, but a game streamer may want to prioritize small details, while animation requires yet another approach to minimize artifacts in its large single-color regions. This can all be done on the fly with an AI-powered compression scheme.
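As a toy illustration of the idea — not WaveOne’s actual method — one could pair an off-the-shelf face detector with a per-block importance map that a hypothetical encoder then uses to spend more bits on faces:

```python
# Toy sketch of content-aware bit allocation (illustrative only, not
# WaveOne's method): detect faces with an off-the-shelf OpenCV model, then
# build a per-block importance map an encoder could use to weight bitrate.
import cv2
import numpy as np

frame = cv2.imread("frame.png")  # assumes a decoded frame on disk
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

BLOCK = 16  # macroblock size in pixels
h, w = gray.shape
importance = np.ones((h // BLOCK, w // BLOCK))  # baseline weight everywhere

for (x, y, fw, fh) in faces:
    # Blocks overlapping a face get four times the baseline weight.
    importance[y // BLOCK:(y + fh) // BLOCK + 1,
               x // BLOCK:(x + fw) // BLOCK + 1] = 4.0

# Normalize into per-block shares of the frame's bit budget.
budget = importance / importance.sum()
print(f"{len(faces)} face(s); top block gets {budget.max():.4%} of the bits")
```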
There are implications beyond consumer tech as well: A self-driving car, sending video between components or to a central server, could save time and improve video quality by focusing on what the autonomous system designates important — vehicles, pedestrians, animals — and not wasting time and bits on a featureless sky, trees in the distance, and so on.
Content-aware encoding and decoding is probably the most versatile and easy to grasp advantage WaveOne claims to offer, but Bourdev also noted that the method is much more resistant to disruption from bandwidth issues. It’s one of the other failings of traditional video codecs that missing a few bits can throw off the whole operation — that’s why you get frozen frames and glitches. But ML-based decoding can easily make a “best guess” based on whatever bits it has, so when your bandwidth is suddenly restricted you don’t freeze, just get a bit less detailed for the duration.
These benefits sound great, but as before the question is not “can we improve on the status quo?” (obviously we can) but “can we scale those improvements?”
“The road is littered with failed attempts to create cool new codecs,” admitted Bourdev. “Part of the reason for that is hardware acceleration; even if you came up with the best codec in the world, good luck if you don’t have a hardware accelerator that runs it. You don’t just need better algorithms, you need to be able to run them in a scalable way across a large variety of devices, on the edge and in the cloud.”
That’s why the special AI cores on the latest generation of devices are so important. This is hardware acceleration that can be adapted in milliseconds to a new purpose. And WaveOne happens to have been working for years on video-focused machine learning that will run on those cores, doing the work that H.26X accelerators have been doing for years, but faster and with far more flexibility.
Of course, there’s still the question of “standards.” Is it very likely that anyone is going to sign on to a single company’s proprietary video compression methods? Well, someone’s got to do it! After all, standards don’t come etched on stone tablets. And as Bourdev and Rippel explained, they actually are using standards — just not the way we’ve come to think of them.
Before, a “standard” in video meant adhering to a rigidly defined software method so that your app or device could work with standards-compatible video efficiently and correctly. But that’s not the only kind of standard. Instead of being a soup-to-nuts method, WaveOne is an implementation that adheres to standards on the ML and deployment side.
They’re building the platform to be compatible with all the major ML distribution and development publishers like TensorFlow, ONNX, Apple’s CoreML, and others. Meanwhile the models actually developed for encoding and decoding video will run just like any other accelerated software on edge or cloud devices: deploy it on AWS or Azure, run it locally with ARM or Intel compute modules, and so on.
It feels like WaveOne may be onto something that ticks all the boxes of a compelling b2b offering: it invisibly improves things for customers, runs on existing or upcoming hardware without modification, saves costs immediately (potentially, anyhow) and leaves room for further investment to add value.
Perhaps that’s why they managed to attract such a large seed round: $6.5 million, led by Khosla Ventures, with $1M each from Vela Partners and Incubate Fund, plus $650K from Omega Venture Partners and $350K from Blue Ivy.
Right now WaveOne is sort of in a pre-alpha stage, having demonstrated the technology satisfactorily but not built a full-scale product. The seed round, Rippel said, was to de-risk the technology, and while there’s still lots of R&D yet to be done, they’ve proven that the core offering works — building the infrastructure and API layers comes next and amounts to a totally different phase for the company. Even so, he said, they hope to get testing done and line up a few customers before they raise more money.
The future of the video industry may not look a lot like the last couple decades, and that could be a very good thing. No doubt we’ll be hearing more from WaveOne as it migrates from lab to product.
Cato Networks has spent the last five years building a cloud-based wide area network that lets individuals connect to network resources regardless of where they are. When the pandemic hit and many businesses shifted to work from home, it was the perfect moment for technology like this. Today, the company was rewarded with a $130 million Series E investment on a $1 billion valuation.
Lightspeed Venture Partners led the round, with participation from new investor Coatue and existing investors Greylock, Aspect Ventures/Acrew Capital, Singtel Innov8 and Shlomo Kramer (who is the co-founder and CEO of the company). The company reports it has now raised $332 million since inception.
Kramer is a serial entrepreneur. He co-founded Check Point Software, which went public in 1996, and Imperva, which went public in 2011 and was later acquired by private equity firm Thoma Bravo in 2018. He helped launch Cato in 2015. “In 2015, we identified that the wide area networks (WANs), which is a tens of billions of dollars market, was still built on the same technology stack […] that connects physical locations, and appliances that protect physical locations and was primarily sold by the telcos and MSPs for many years,” Kramer explained.
The idea with Cato was to take that technology and redesign it for a mobile and cloud world, not one that was built for the previous generation of software that lived in private data centers and was mostly accessed from an office. Today they have a cloud-based network of 60 Points of Presence (PoPs) around the world, giving customers access to networking resources and network security no matter where they happen to be.
The bet they made was a good one because the world has changed, and that became even more pronounced this year when COVID hit and forced many people to work from home. Suddenly, the ability to sign in from anywhere became more important than ever, and the company has been doing well, with 2x growth in ARR this year (although Kramer wouldn’t share specific revenue numbers).
As a company taking Series E funding, Kramer doesn’t shy away from the idea of eventually going public, especially since he’s done it twice before, but neither is he ready to commit to any timetable. For now, he says, the company is growing rapidly, with almost 700 customers — and that’s why it decided to take such a large capital influx right now.
Cato currently has 270 employees, with plans to grow to 400 by the end of next year. Kramer says that Cato is a global company headquartered in Israel, where diversity involves religion, but he is trying to build a diverse and inclusive culture regardless of location.
“My feeling is that inclusion needs to happen in the earlier stages of the funnel. I’m personally involved in these efforts, at the educational sector level, and when students are ready to be recruited by startups, we are already competitive, and if you look at our employee base it’s very diverse,” Kramer said.
With the new funds, he plans to keep building the company and the product. “There’s a huge opportunity and we want to move as fast as possible. We are also going to make very big investments on the engineering side to take the solution and go to the next level,” he said.
Deep Vision, a new AI startup that is building an AI inferencing chip for edge computing solutions, is coming out of stealth today. The six-year-old company’s new ARA-1 processors promise to strike the right balance between low latency, energy efficiency and compute power for use in anything from sensors to cameras and full-fledged edge servers.
Because of its strength in real-time video analysis, the company is aiming its chip at solutions around smart retail, including cashier-less stores, smart cities and Industry 4.0/robotics. The company is also working with suppliers to the automotive industry, but less around autonomous driving than monitoring in-cabin activity to ensure that drivers are paying attention to the road and aren’t distracted or sleepy.
The company was founded by its CTO Rehan Hameed and its Chief Architect Wajahat Qadeer, who recruited Ravi Annavajjhala, who previously worked at Intel and SanDisk, as the company’s CEO. Hameed and Qadeer developed Deep Vision’s architecture as part of a PhD thesis at Stanford.
“They came up with a very compelling architecture for AI that minimizes data movement within the chip,” Annavajjhala explained. “That gives you extraordinary efficiency — both in terms of performance per dollar and performance per watt — when looking at AI workloads.”
Long before the team had working hardware, though, the company focused on building its compiler to ensure that its solution could actually address its customers’ needs. Only then did they finalize the chip design.
As Hameed told me, Deep Vision’s focus was always on reducing latency. Its competitors often emphasize throughput, but the team believes that for edge solutions, latency is the more important metric: architectures that optimize for throughput make sense in the data center, Hameed argues, but that doesn’t necessarily make them a good fit at the edge.
“[Throughput architectures] require a large number of streams being processed by the accelerator at the same time to fully utilize the hardware, whether it’s through batching or pipeline execution,” he explained. “That’s the only way for them to get their big throughput. The result, of course, is high latency for individual tasks and that makes them a poor fit in our opinion for an edge use case where real-time performance is key.”
To enable this performance — and Deep Vision claims that its processor offers far lower latency than Google’s Edge TPUs and Movidius’ MyriadX, for example — the team is using an architecture that reduces data movement on the chip to a minimum. In addition, its software optimizes the overall data flow inside the architecture based on the specific workload.
“In our design, instead of baking in a particular acceleration strategy into the hardware, we have instead built the right programmable primitives into our own processor, which allows the software to map any type of data flow or any execution flow that you might find in a neural network graph efficiently on top of the same set of basic primitives,” said Hameed.
With this, the compiler can then look at the model and figure out how to best map it on the hardware to optimize for data flow and minimize data movement. Thanks to this, the processor and compiler can also support virtually any neural network framework and optimize their models without the developers having to think about the specific hardware constraints that often make working with other chips hard.
“Every aspect of our hardware/software stack has been architected with the same two high-level goals in mind,” Hameed said. “One is to minimize the data movement to drive efficiency. And then also to keep every part of the design flexible in a way where the right execution plan can be used for every type of problem.”
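A toy version of that compiler decision — purely illustrative, since Deep Vision’s actual primitives and cost model are not public — might enumerate candidate execution plans per layer and pick whichever one moves the least data:

```python
# Toy sketch of a compiler pass that picks, per layer, the execution plan
# with the least estimated data movement. Illustrative only; not Deep
# Vision's implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class Plan:
    name: str
    bytes_moved: int  # estimated on-chip data movement for this layer

def lower_layer(layer: str, candidates: List[Plan]) -> Plan:
    # Choose the candidate plan that minimizes data movement.
    best = min(candidates, key=lambda p: p.bytes_moved)
    print(f"{layer}: chose '{best.name}' ({best.bytes_moved} bytes moved)")
    return best

# Hypothetical candidate plans for two layers of a network graph.
lower_layer("conv1", [Plan("weight-stationary", 120_000),
                      Plan("output-stationary", 80_000)])
lower_layer("conv2", [Plan("weight-stationary", 60_000),
                      Plan("row-stationary", 95_000)])
```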
Since its founding, the company has raised about $19 million and filed nine patents. The new chip has been sampling for a while, and even though the company already has a couple of customers, it chose to remain under the radar until now. The company obviously hopes that its unique architecture can give it an edge in this market, which is getting increasingly competitive. Besides the likes of Intel’s Movidius chips (and custom chips from Google and AWS for their own clouds), there are also plenty of startups in this space, including the likes of Hailo, which raised a $60 million Series B round earlier this year and recently launched its new chips, too.
Arrikto, a startup that wants to speed up the machine learning development lifecycle by allowing engineers and data scientists to treat data like code, is coming out of stealth today and announcing a $10 million Series A round. The round was led by Unusual Ventures, with Unusual’s John Vrionis joining the board.
“Our technology at Arrikto helps companies overcome the complexities of implementing and managing machine learning applications,” Arrikto CEO and co-founder Constantinos Venetsanopoulos explained. “We make it super easy to set up end-to-end machine learning pipelines. More specifically, we make it easy to build, train and deploy ML models into production using Kubernetes, and to intelligently manage all the data around them.”
Like so many developer-centric platforms today, Arrikto is all about “shift left.” Currently, the team argues, machine learning teams and developer teams don’t speak the same language and use different tools to build models and to put them into production.
“Much like DevOps shifted deployment left, to developers in the software development life cycle, Arrikto shifts deployment left to data scientists in the machine learning life cycle,” Venetsanopoulos explained.
Arrikto also aims to reduce the technical barriers that still make implementing machine learning so difficult for most enterprises. Venetsanopoulos noted that just like Kubernetes showed businesses what a simple and scalable infrastructure could look like, Arrikto can show them what a simpler ML production pipeline can look like — and do so in a Kubernetes-native way.
At the core of Arrikto is Kubeflow, the Google-incubated open-source machine learning toolkit for Kubernetes — and in many ways, you can think of Arrikto as offering an enterprise-ready version of Kubeflow. Among other projects, the team also built MiniKF to run Kubeflow on a laptop, and uses Kale, which lets engineers build Kubeflow pipelines from their JupyterLab notebooks.
As Venetsanopoulos noted, Arrikto’s technology does three things: it simplifies deploying and managing Kubeflow, allows data scientists to manage it using the tools they already know, and it creates a portable environment for data science that enables data versioning and data sharing across teams and clouds.
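For flavor, here is a minimal Kubeflow pipeline definition using the open-source kfp SDK’s v1-style API — a generic Kubeflow example, not Arrikto-specific code:

```python
# Minimal Kubeflow Pipelines sketch using the open-source kfp SDK (v1-style
# API). A generic Kubeflow example, not Arrikto-specific code.
import kfp
from kfp import dsl

def preprocess_op() -> dsl.ContainerOp:
    return dsl.ContainerOp(
        name="preprocess",
        image="python:3.9-slim",
        command=["python", "-c", "print('preparing training data')"],
    )

def train_op() -> dsl.ContainerOp:
    return dsl.ContainerOp(
        name="train",
        image="python:3.9-slim",
        command=["python", "-c", "print('training model')"],
    )

@dsl.pipeline(name="demo-pipeline", description="Preprocess, then train.")
def demo_pipeline():
    # Run the training step only after preprocessing completes.
    train_op().after(preprocess_op())

if __name__ == "__main__":
    # Compile to a spec that a Kubeflow cluster can execute.
    kfp.compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")
```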
While Arrikto has stayed off the radar since it launched out of Athens, Greece, in 2015, the founding team of Venetsanopoulos and CTO Vangelis Koukis has already managed to get a number of large enterprises to adopt its platform. Arrikto currently has more than 100 customers and, while the company isn’t allowed to name any of them just yet, Venetsanopoulos said they include one of the largest oil and gas companies.
And while you may not think of Athens as a startup hub, Venetsanopoulos argues that this is changing and there is a lot of talent there (though the company is also using the funding to build out its sales and marketing team in Silicon Valley). “There’s top-notch talent from top-notch universities that’s still untapped. It’s like we have an unfair advantage,” he said.
Startups need to live in the future. They create roadmaps, build products and continually upgrade them with an eye on next year — or even a few years out.
Big companies, often the target customers for startups, live in a much more near-term world. They buy technologies that can solve problems they know about today, rather than those they may face a couple of bends down the road. In other words, they’re driving a Dodge, and most tech entrepreneurs are driving a DeLorean equipped with a flux capacitor.
That situation can lead to a huge waste of time for startups that want to sell to enterprise customers: a business development black hole. Startups are talking about technology shifts and customer demands that the executives inside the large company — even if they have “innovation,” “IT,” or “emerging technology” in their titles — just don’t see as an urgent priority yet, or can’t sell to their colleagues.
How do you avoid the aforementioned black hole? Some recent research that my company, Innovation Leader, conducted in collaboration with KPMG LLP, suggests a constructive approach.
Rather than asking large companies about which technologies they were experimenting with, we created four buckets, based on what you might call “commitment level.” (Our survey had 211 respondents, 62% of them in North America and 59% at companies with greater than $1 billion in annual revenue.) We asked survey respondents to assess a list of 16 technologies, from advanced analytics to quantum computing, and put each one into one of these four buckets. We conducted the survey at the tail end of Q3 2020.
Respondents in the first group were “not exploring or investing” — in other words, “we don’t care about this right now.” The top technology there was quantum computing.
Bucket #2 was the second-lowest commitment level: “learning and exploring.” At this stage, a startup gets to educate its prospective corporate customer about an emerging technology — but nabbing a purchase commitment is still quite a few exits down the highway. It can be constructive to begin building relationships when a company is at this stage, but your sales staff shouldn’t start calculating their commissions just yet.
Here are the top five things that fell into the “learning and exploring” cohort, in ranked order:
Technologies in the third group, “investing or piloting,” may represent the sweet spot for startups. At this stage, the corporate customer has already discovered some internal problem or use case that the technology might address. They may have shaken loose some early funding. They may have departments internally, or test sites externally, where they know they can conduct pilots. Often, they’re assessing what established tech vendors like Microsoft, Oracle and Cisco can provide — and they may find their solutions wanting.
Here’s what our survey respondents put into the “investing or piloting” bucket, in ranked order:
By the time a technology is placed into the fourth category, which we dubbed “in-market or accelerating investment,” it may be too late for a startup to find a foothold. There’s already a clear understanding of at least some of the use cases or problems that need solving, and return-on-investment metrics have been established. But some providers have already been chosen based on successful pilots, and you may need to dislodge someone the enterprise is already working with. It can happen, but the headwinds are strong.
Here’s what the survey respondents placed into the “in-market or accelerating investment” bucket, in ranked order: