Scandit raises $80M as COVID-19 drives demand for contactless deliveries

Enterprise barcode scanner company Scandit has closed an $80 million Series C round, led by Silicon Valley VC firm G2VP. Atomico, GV, Kreos, NGP Capital, Salesforce Ventures and Swisscom Ventures also participated in the round — which brings its total raised to date to $123M.

The Zurich-based firm offers a platform that combines computer vision and machine learning with barcode scanning, text recognition (OCR), object recognition and augmented reality. It is designed for any camera-equipped smart device — from smartphones to drones, wearables (e.g. AR glasses for warehouse workers) and even robots.

Use cases include mobile shopping apps and websites; self-checkout; inventory management; proof of delivery; and asset tracking and maintenance — including in healthcare, where its tech can power the scanning of patient IDs, samples, medication and supplies.

It bills its software as “unmatched” in terms of speed and accuracy, as well as the ability to scan in bad light; at any angle; and with damaged labels. Target industries include retail, healthcare, industrial/manufacturing, travel, transport & logistics and more.

The latest funding injection follows a $30M Series B round back in 2018. Since then, Scandit says, it has tripled its recurring revenue, more than doubled its count of blue-chip enterprise customers and doubled the size of its global team.

Global customers for its tech include the likes of 7-Eleven, Alaska Airlines, Carrefour, DPD, FedEx, Instacart, Johns Hopkins Hospital, La Poste, Levi Strauss & Co, Mount Sinai Hospital and Toyota — with the company touting “tens of billions of scans” per year on 100+ million active devices at this stage of its business.

It says the new funding will go toward accelerating growth in new markets, including APAC and Latin America, as well as building out its footprint and operations in North America and Europe. Also on the slate: funding more R&D to devise new ways for enterprises to transform their core business processes using computer vision and AR.

The need for social distancing during the coronavirus pandemic has also accelerated demand for mobile computer vision on personal smart devices, according to Scandit, which says customers are looking for ways to enable more contactless interactions.

Another demand spike it’s seeing is coming from the pandemic-related boom in ‘Click & Collect’ retail and “millions” of extra home deliveries — something its tech is well positioned to cater to because its scanning apps support BYOD (bring your own device), rather than requiring proprietary hardware.

“COVID-19 has shone a spotlight on the need for rapid digital transformation in these uncertain times, and the need to blend the physical and digital plays a crucial role,” said CEO Samuel Mueller in a statement. “Our new funding makes it possible for us to help even more enterprises to quickly adapt to the new demand for ‘contactless business’, and be better positioned to succeed, whatever the new normal is.”

Also commenting on the funding in a supporting statement, Ben Kortlang, general partner at G2VP, added: “Scandit’s platform puts an enterprise-grade scanning solution in the pocket of every employee and customer without requiring legacy hardware. This bridge between the physical and digital worlds will be increasingly critical as the world accelerates its shift to online purchasing and delivery, distributed supply chains and cashierless retail.”


By Natasha Lomas

Microsoft launches Project Bonsai, its new machine teaching service for building autonomous systems

At its Build developer conference, Microsoft today announced that Project Bonsai, its new machine teaching service, is now in public preview.

If that name sounds familiar, it’s probably because you remember that Microsoft acquired Bonsai, a company that focuses on machine teaching, back in 2018. Bonsai combined simulation tools with different machine learning techniques to build a general-purpose deep reinforcement learning platform, with a focus on industrial control systems.

It’s maybe no surprise then that Project Bonsai, too, has a similar focus on helping businesses teach and manage their autonomous machines. “With Project Bonsai, subject-matter experts can add state-of-the-art intelligence to their most dynamic physical systems and processes without needing a background in AI,” the company notes in its press materials.

“The public preview of Project Bonsai builds on top of the Bonsai acquisition and the autonomous systems private preview announcements made at Build and Ignite of last year,” a Microsoft spokesperson told me.

Interestingly, Microsoft notes that Project Bonsai is only the first block of a larger vision to help its customers build these autonomous systems. The company also stresses the advantages of machine teaching over other machine learning approaches, especially the fact that it is less of a black box than other methods, which makes it easier for developers and engineers to debug systems that don’t work as expected.

In addition to Bonsai, Microsoft also today announced Project Moab, an open-source balancing robot that is meant to help engineers and developers learn the basics of how to build a real-world control system. The idea here is to teach the robot to keep a ball balanced on top of a platform that is held by three arms.

Potential users will be able to either 3D print the robot themselves or buy one when it goes on sale later this year. There is also a simulation, developed by MathWorks, that developers can try out immediately.

“You can very quickly take it into areas where doing it in traditional ways would not be easy, such as balancing an egg instead,” said Mark Hammond, Microsoft General Manager for Autonomous Systems. “The point of the Project Moab system is to provide that playground where engineers tackling various problems can learn how to use the tooling and simulation models. Once they understand the concepts, they can apply it to their novel use case.”
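For a sense of what the “traditional ways” Hammond mentions look like, here is a toy one-dimensional version of the ball-balancing problem solved with a classical PD controller. The physics constants and gains are invented for illustration; this is not Microsoft’s code.

```python
# Minimal 1-D sketch of the control problem Moab poses: tilt a plate
# to keep a ball at the center. Numbers are illustrative only.

def simulate(steps=400, dt=0.02, kp=8.0, kd=4.0):
    pos, vel = 0.10, 0.0                   # ball starts 10 cm off-center
    g = 9.81
    for _ in range(steps):
        tilt = -(kp * pos + kd * vel)      # PD control law
        tilt = max(-0.3, min(0.3, tilt))   # actuator limits (radians)
        accel = g * tilt                   # small-angle rolling approximation
        vel += accel * dt
        pos += vel * dt
    return pos, vel

pos, vel = simulate()
print(f"final position: {pos:.4f} m, velocity: {vel:.4f} m/s")
```

A reinforcement learning agent trained through Bonsai would replace the hand-tuned control law, which is what makes harder variants (the egg) tractable.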


By Frederic Lardinois

Adobe announces AI toolbox for Experience Platform

Most companies don’t have the personnel to do AI well, so they turn to platform vendors like Adobe for help. Like other platforms, it has been building AI into its product set for several years now, but wanted to give marketers a set of tools that take advantage of some advanced AI capabilities out of the box.

Today, the company announced five pre-packaged AI solutions specifically designed to give marketers more intelligent insight. Amit Ahuja, VP of ecosystem development at Adobe, says even before the pandemic, customers were struggling to deal with the onslaught of data and how they could use it to understand their customers better.

“There is so much data coming in, and customers are struggling to leverage this data — and not just for the purpose of analytics and insights, which is a huge part of it, but also to do predictive optimization,” Ahuja explained.

What’s more, we’ve known for some time that when there is so much data, it becomes impossible to make sense of it manually. Given that AI deals best with tons of data, Adobe wanted to take advantage of that, while packaging some popular data scenarios in a way that makes it easy for marketers to get insights.

That data comes from the Adobe Experience Platform, which is designed to pull data not only from Adobe products, but from a variety of enterprise sources to help marketers build a more complete picture of their customers and get answers to key questions.

Customer Insights AI helps users understand their customers better. Image Credit: Adobe

The company is announcing a total of five AI tools today, two of which are generally available with the remainder in Beta for now. For starters, Customer AI helps marketers understand why their customers do what they do. For instance, why they keep coming back or why they stopped. Attribution AI helps marketers understand how effective their strategies are, something that’s always important, but especially in this economy where effectively deploying spend is more important than ever.

The first of the Beta tools is Journey AI, which helps marketers decide the best channel for engaging customers. Content and Commerce AI looks at the most effective way to deliver content, and finally Leads AI identifies the visitors most likely to convert into customers.

These five are just a start, and the company plans to add new tools to the toolbox as customers look for additional insights from the data to help them improve their marketing outcomes.


By Ron Miller

SiMa.ai announces $30M Series A to build out lower-power edge chip solution

Krishna Rangasayee, founder and CEO, at SiMa.ai, has 30 years of experience in the semiconductor industry. He decided to put that experience to work in a startup and launched SiMa.ai last year with the goal of building an ultra low-power software and chip solution for machine learning at the edge.

Today he announced a $30 million Series A led by Dell Technologies Capital with help from Amplify Partners, Wing Venture Capital and +ND Capital. Today’s investment brings the total raised to $40 million, according to the company.

Rangasayee says in his years as a chip executive he saw a gap in the machine learning market for embedded devices running at the edge and he decided to start the company to solve that issue.

“While the majority of the market was serviced by traditional computing, machine learning was beginning to make an impact and it was really amazing. I wanted to build a company that would bring machine learning at significant scale to help the problems with embedded markets,” he told TechCrunch.

The company is trying to focus on efficiency, which it says will make the solution more environmentally friendly by using less power. “Our solution can scale high performance at the lowest power, and that translates to the highest frames per second per watt. We have built out an architecture and a software solution that is at a minimum 30x better than anybody else on the frames per second,” he explained.

He added that achieving that efficiency required them to build a chip from scratch because there isn’t a solution available off the shelf today that could achieve that.

So far the company has attracted 20 early design partners, who are testing what they’ve built. He hopes to have the chip designed and the software solution in Beta in the Q4 timeframe this year, and is shooting for chip production by Q2 in 2021.

He recognizes that it’s hard to raise this kind of money in the current environment and he’s grateful to the investors, and the design partners who believe in his vision. The timing could actually work in the company’s favor because it can hunker down and build product while navigating through the current economic malaise.

Perhaps by 2021 when the product is in production, the market and the economy will be in better shape and the company will be ready to deliver.


By Ron Miller

Amazon releases Kendra to solve enterprise search with AI and machine learning

Enterprise search has always been a tough nut to crack. The holy grail has been to operate like Google, but in-house: you enter a few keywords and get back a nearly perfect response at the top of the results list. The irony is that search inside a company is hampered by a lack of content.

While Google has the entire World Wide Web to work with, enterprises have a much narrower corpus. It would be easy to think that should make it easier to find the ideal response, but in fact the opposite is true: the more data you have, the more likely it is that the correct document is in there to be found.

Amazon is trying to change the enterprise search game by putting it into a more modern machine-learning driven context to use today’s technology to help you find that perfect response just as you typically do on the web.

Today the company announced the general availability of Amazon Kendra, its cloud enterprise search product that the company announced last year at AWS re:Invent. It uses natural language processing to allow the user to simply ask a question, then searches across the repositories connected to the search engine to find a precise answer.

“Amazon Kendra reinvents enterprise search by allowing end-users to search across multiple silos of data using real questions (not just keywords) and leverages machine learning models under the hood to understand the content of documents and the relationships between them to deliver the precise answers they seek (instead of a random list of links),” the company described the new service in a statement.
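In practice, a developer would call the Kendra Query API (for example via boto3’s `kendra` client) and pick the precise answer out of the result list. The helper below sketches that extraction step; the `ResultItems`/`Type` fields mirror the shape of Kendra’s documented query response, but the sample payload is invented for illustration.

```python
# Pulling the "precise answer" out of a Kendra-style query response,
# preferring ANSWER-type results over plain document links.

def top_answer(response):
    """Return the first ANSWER result's text, else the first document URI."""
    for item in response.get("ResultItems", []):
        if item.get("Type") == "ANSWER":
            return item["DocumentExcerpt"]["Text"]
    for item in response.get("ResultItems", []):
        if item.get("Type") == "DOCUMENT":
            return item.get("DocumentURI")
    return None

sample = {
    "ResultItems": [
        {"Type": "DOCUMENT", "DocumentURI": "https://intranet.example/vpn-setup"},
        {"Type": "ANSWER",
         "DocumentExcerpt": {"Text": "IT support desk hours are 9am-5pm ET."}},
    ]
}
print(top_answer(sample))  # -> IT support desk hours are 9am-5pm ET.
```

In real use, `response` would come from something like `boto3.client("kendra").query(IndexId=..., QueryText="When is the IT desk open?")`.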

AWS has tuned the search engine for specific industries, including IT, healthcare and insurance, and promises that versions for energy, industrial, financial services, legal, media and entertainment, travel and hospitality, human resources, news, telecommunications, mining, food and beverage and automotive will come later this year.

This means any company in one of those industries should have a head start when it comes to searching, because the system will understand the language specific to those verticals. You can drop your Kendra search box into an application or a website, and it has features like type-ahead that you would expect in a tool like this.

Enterprise search has been around for a long time, but perhaps by bringing AI and machine learning to bear on it, we can finally solve it once and for all.


By Ron Miller

Enterprise companies find MLOps critical for reliability and performance

Enterprise startups UiPath and Scale have drawn huge attention in recent years from companies looking to automate workflows, from RPA (robotic process automation) to data labeling.

What’s been overlooked in the wake of such workflow-specific tools has been the base class of products that enterprises are using to build the core of their machine learning (ML) workflows, and the shift in focus toward automating the deployment and governance aspects of the ML workflow.

That’s where MLOps comes in, and its popularity has been fueled by the rise of core ML workflow platforms such as Boston-based DataRobot. The company has raised more than $430 million and reached a $1 billion valuation this past fall serving this very need for enterprise customers. DataRobot’s vision has been simple: enabling a range of users within enterprises, from business and IT users to data scientists, to gather data and build, test and deploy ML models quickly.

Founded in 2012, the company has quietly amassed a customer base that boasts more than a third of the Fortune 50, with triple-digit yearly growth since 2015. DataRobot’s top four industries include finance, retail, healthcare and insurance; its customers have deployed over 1.7 billion models through DataRobot’s platform. The company is not alone, with competitors like H2O.ai, which raised a $72.5 million Series D led by Goldman Sachs last August, offering a similar platform.

Why the excitement? As artificial intelligence pushed into the enterprise, the first step was to go from data to a working ML model, which started with data scientists doing this manually, but today is increasingly automated and has become known as “auto ML.” An auto-ML platform like DataRobot’s can let an enterprise user quickly auto-select features based on their data and auto-generate a number of models to see which ones work best.
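Stripped to its essence, auto ML is a search over candidate models scored on held-out data. The toy sketch below is plain Python and nothing like DataRobot’s actual engine; it simply auto-selects between a mean baseline and a least-squares line, the same select-the-best-candidate pattern real platforms automate at scale.

```python
# Toy auto-ML: fit several candidate models, keep whichever scores best
# on held-out validation data.

def fit_mean(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b, closed form.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def auto_select(train, valid, candidates):
    def mse(model, data):
        return sum((model(x) - y) ** 2 for x, y in data) / len(data)
    fitted = {name: fit([x for x, _ in train], [y for _, y in train])
              for name, fit in candidates.items()}
    return min(fitted.items(), key=lambda kv: mse(kv[1], valid))

train = [(x, 2 * x + 1) for x in range(10)]
valid = [(x, 2 * x + 1) for x in range(10, 15)]
name, model = auto_select(train, valid,
                          {"mean": fit_mean, "linear": fit_linear})
print(name)  # -> linear
```

Production auto-ML systems layer feature selection, hyperparameter tuning and ensembling on top of this same loop.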

As auto ML became more popular, improving the deployment phase of the ML workflow has become critical for reliability and performance — and so enters MLOps. It’s quite similar to the way that DevOps has improved the deployment of source code for applications. Companies such as DataRobot and H2O.ai, along with other startups and the major cloud providers, are intensifying their efforts on providing MLOps solutions for customers.

We sat down with DataRobot’s team to understand how their platform has been helping enterprises build auto-ML workflows, what MLOps is all about and what’s been driving customers to adopt MLOps practices now.

The rise of MLOps


By Walter Thompson

Run:AI brings virtualization to GPUs running Kubernetes workloads

In the early 2000s, VMware introduced the world to virtual servers that allowed IT to make more efficient use of idle server capacity. Today, Run:AI is introducing that same concept to GPUs running containerized machine learning projects on Kubernetes.

This should enable data science teams to have access to more resources than they would normally get were they simply allocated a certain number of available GPUs. Company CEO and co-founder Omri Geller says his company believes that part of the issue in getting AI projects to market is due to static resource allocation holding back data science teams.

“There are many times when those important and expensive compute resources are sitting idle while, at the same time, other users who might need more compute power, because they need to run more experiments, don’t have access to available resources because they are part of a static assignment,” Geller explained.

To solve that issue of static resource allocation, Run:AI came up with a solution to virtualize those GPU resources, whether on prem or in the cloud, and let IT define by policy how those resources should be divided.
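Conceptually, that policy layer behaves like a quota-aware pool sitting between teams and the hardware: fractional GPU requests draw from a shared pool until a team hits the quota IT assigned it. The sketch below is a minimal illustration of the idea, not Run:AI’s scheduler; the class name, team names and numbers are all invented.

```python
# Conceptual sketch of policy-based GPU pooling: jobs draw fractional
# GPUs from a shared pool, capped by per-team quotas set by IT.

class GpuPool:
    def __init__(self, total_gpus, quotas):
        self.free = float(total_gpus)
        self.quotas = dict(quotas)            # team -> max concurrent GPUs
        self.used = {team: 0.0 for team in quotas}

    def allocate(self, team, gpus):
        within_quota = self.used[team] + gpus <= self.quotas[team]
        if within_quota and gpus <= self.free:
            self.used[team] += gpus
            self.free -= gpus
            return True
        return False                          # job waits instead of failing

    def release(self, team, gpus):
        self.used[team] -= gpus
        self.free += gpus

pool = GpuPool(total_gpus=4, quotas={"research": 3, "prod": 2})
print(pool.allocate("research", 2.5))  # True: fits both pool and quota
print(pool.allocate("prod", 2.0))      # False: only 1.5 GPUs free
pool.release("research", 2.5)
print(pool.allocate("prod", 2.0))      # True once capacity is freed
```

The contrast with static assignment is that nothing here is pinned to a user; idle capacity flows to whoever can use it, within policy.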

“There is a need for a specific virtualization approach for AI, with actively managed orchestration and scheduling of those GPU resources, while providing visibility and control over those compute resources to IT organizations and AI administrators,” he said.

Run:AI creates a resource pool, which allocates based on need. Image Credits: Run:AI

Run:AI built a solution to bridge this gap between the resources IT is providing to data science teams and what they require to run a given job, while still giving IT some control over defining how that works.

“We really help companies get much more out of their infrastructure, and we do it by really abstracting the hardware from the data science, meaning you can simply run your experiment without thinking about the underlying hardware, and at any moment in time you can consume as much compute power as you need,” he said.

While the company is still in its early stages, and the current economic situation is hitting everyone hard, Geller sees a place for a solution like Run:AI because it gives customers the capacity to make the most out of existing resources, while making data science teams run more efficiently.

He also is taking a realistic long view when it comes to customer acquisition during this time. “These are challenging times for everyone,” he says. “We have plans for longer time partnerships with our customers that are not optimized for short term revenues.”

Run:AI was founded in 2018. It has raised $13 million, according to Geller. The company is based in Israel with offices in the United States. It currently has 25 employees and a few dozen customers.


By Ron Miller

Tecton.ai emerges from stealth with $20M Series A to build machine learning platform

Three former Uber engineers, who helped build the company’s Michelangelo machine learning platform, left the company last year to form Tecton.ai and build an operational machine learning platform for everyone else. Today the company announced a $20 million Series A from a couple of high-profile investors.

Andreessen Horowitz and Sequoia Capital co-led the round, with Martin Casado, general partner at a16z, and Matt Miller, partner at Sequoia, joining the company’s board under the terms of the agreement. Today’s investment, combined with the seed funding the founders spent the past year using to build the product, brings the total to $25 million. Not bad in today’s environment.

But when you have the pedigree of these three founders — CEO Mike Del Balso, CTO Kevin Stumpf and VP of Engineering Jeremy Hermann all helped build the Uber system — investors will spend some money, especially when you are trying to solve a difficult problem around machine learning.

The Michelangelo system was the machine learning platform at Uber that looked at things like driver safety, estimated arrival time and fraud detection, among other things. The three founders wanted to take what they had learned at Uber and put it to work for companies struggling with machine learning.

“What Tecton is really about is helping organizations make it really easy to build production-level machine learning systems, and put them in production and operate them correctly. And we focus on the data layer of machine learning,” CEO Del Balso told TechCrunch.

Image Credit: Tecton.ai

Del Balso says part of the problem, even for companies that are machine learning-savvy, is building and reusing models across different use cases. In fact, he says the vast majority of machine learning projects out there are failing, and Tecton wanted to give these companies the tools to change that.

The company has come up with a solution to make it much easier to create a model and put it to work by connecting to data sources, making it easier to reuse the data and the models across related use cases. “We’re focused on the data tasks related to machine learning, and all the data pipelines that are related to power those models,” Del Balso said.
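The “data layer” Del Balso describes is commonly implemented as a feature store: feature transformations are defined once and then reused by every model that needs them. The toy registration-and-reuse pattern below illustrates the concept only; it is not Tecton’s API, and the feature names and data are invented.

```python
# Toy feature store: define a feature pipeline once, reuse it across models.

FEATURES = {}

def feature(name):
    """Decorator that registers a feature transformation under a name."""
    def register(fn):
        FEATURES[name] = fn
        return fn
    return register

@feature("trip_count_7d")
def trip_count_7d(user):
    return len([t for t in user["trips"] if t["days_ago"] <= 7])

@feature("avg_fare")
def avg_fare(user):
    fares = [t["fare"] for t in user["trips"]]
    return sum(fares) / len(fares) if fares else 0.0

def feature_vector(user, names):
    # A fraud model and an ETA model can both request the same features,
    # guaranteeing they are computed identically in training and serving.
    return [FEATURES[n](user) for n in names]

user = {"trips": [{"days_ago": 2, "fare": 12.0}, {"days_ago": 9, "fare": 8.0}]}
print(feature_vector(user, ["trip_count_7d", "avg_fare"]))  # -> [1, 10.0]
```

The hard production problems (backfills, point-in-time correctness, streaming sources) are exactly the "data pipelines" the quote refers to.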

Certainly Martin Casado from a16z sees a problem in search of a solution and he likes the background of this team and its understanding of building a system like this at scale. “After tracking a number of deep engagements with top ML teams and their interest in what Tecton was building, we invested in Tecton’s A alongside Sequoia. We strongly believe that these systems will continue to increasingly rely on data and ML models, and an entirely new tool chain is needed to aid in developing them…,” he wrote in a blog post announcing the funding.

The company currently has 17 employees and is looking to hire, particularly data scientists and machine learning engineers, with a goal of 30 employees by the end of the year.

While Del Balso is certainly cognizant of the current economic situation, he believes he can still build this company because he’s solving a problem that people genuinely are looking for help with right now around machine learning.

“From the customers we’re talking to, they need to solve these problems, and so we don’t see things slowing down,” he said.


By Ron Miller

Granulate announces $12M Series A to optimize infrastructure performance

As companies increasingly look to find ways to cut costs, Granulate, an early-stage Israeli startup, has come up with a clever way to optimize infrastructure usage. Today it was rewarded with a tidy $12 million Series A investment.

Insight Partners led the round with participation from TLV Partners and Hetz Ventures. Lonne Jaffe, managing director at Insight Partners, will be joining the Granulate board under the terms of the agreement. Today’s investment brings the total raised to $15.6 million, according to the company.

The startup claims it can cut infrastructure costs, whether on-prem or in the cloud, by between 20% and 80%. That is not insignificant if it can pull this off, especially in the economic maelstrom in which we find ourselves.

Asaf Ezra, co-founder and CEO at Granulate, says the company achieved this efficiency through deep study of how Linux virtual machines work. Over six months of experimentation, the founders moved the bottleneck around until they learned how to take advantage of the way the Linux kernel operates to gain massive efficiencies.

It turns out that Linux has been optimized for resource fairness, but Granulate’s founders wanted to flip this idea on its head and look for repetitiveness, concentrating on one function instead of fair allocation across many functions, some of which might not really need access at any given moment.

“When it comes to production systems, you have a lot of repetitiveness in the machine, and you basically want it to do one thing really well,” he said.

He points out that it doesn’t even have to be a VM; it could also be a container or a pod in Kubernetes. The important thing to remember is that you no longer care about the interactivity and fairness inherent in Linux; instead, you want the machine to be optimized for certain things.

“You let us know what your utility function for that production system is, then our agents basically optimize all the decision-making for that utility function. That means that you don’t even have to do any code changes to gain the benefit,” Ezra explained.
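A toy model makes the fairness-versus-specialization trade-off concrete. Suppose every switch between task types pays a fixed cache/context penalty: a fairness-style interleaving pays that penalty constantly, while a schedule biased toward the dominant function pays it only a couple of times. The numbers below are invented purely to illustrate the mechanism; they are not Granulate’s algorithm.

```python
# Why batching a repetitive production workload beats fair interleaving
# when every task-type switch has a fixed cost (illustrative numbers).

SWITCH_COST = 5   # time units lost whenever the running task type changes
WORK_COST = 1     # time units per request

def run(schedule):
    """Total time to process a schedule of task types."""
    time, prev = 0, None
    for task in schedule:
        if task != prev:
            time += SWITCH_COST   # cold caches, context switch, etc.
        time += WORK_COST
        prev = task
    return time

# The same 100 requests (90 "serve", 10 "batch"), in two orderings:
interleaved = ["serve", "batch"] * 10 + ["serve"] * 80  # fairness-style mixing
batched = ["serve"] * 90 + ["batch"] * 10               # specialized ordering

print(run(interleaved))  # 205: pays the switch cost 21 times
print(run(batched))      # 110: pays it only twice
```

The real system works at the kernel scheduling level rather than reordering a request list, but the payoff comes from the same observation: production machines mostly repeat one thing.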

What’s more, the solution uses machine learning to help understand how the different utility functions work to provide greater optimization to improve performance even more over time.

Insight’s Jaffe certainly recognized the potential of such a solution, especially right now.

“The need to have high-performance digital experiences and lower infrastructure costs has never been more important, and Granulate has a highly differentiated offering powered by machine learning that’s not dependent on configuration management or cloud resource purchasing solutions,” Jaffe said in a statement.

Ezra understands that a product like his could be particularly helpful at the moment. “We’re in a unique position. Our offering right now helps organizations survive the downturn by saving costs without firing people,” he said.

The company was founded in 2018 and currently has 20 employees. They plan to double that by the end of 2020.


By Ron Miller

Google Cloud’s fully managed Anthos is now generally available for AWS

A year ago, back in the days of in-person conferences, Google officially announced the launch of its Anthos multi-cloud application modernization platform at its Cloud Next conference. The promise of Anthos was always that it would allow enterprises to write their applications once, package them into containers and then manage their multi-cloud deployments across GCP, AWS, Azure and their on-prem data centers.

Until now, support for AWS and Azure was only available in preview, but today, the company is making support for AWS and on-premises generally available. Microsoft Azure support remains in preview, though.

“As an AWS customer now, or a GCP customer, or a multi-cloud customer, […] you can now run Anthos on those environments in a consistent way, so you don’t have to learn any proprietary APIs and be locked in,” Eyal Manor, the VP of engineering in charge of Anthos, told me. “And for the first time, we enable the portability between different infrastructure environments as opposed to what has happened in the past where you were locked into a set of API’s.”

Manor stressed that Anthos was designed to be multi-cloud from day one. As for why AWS support is launching ahead of Azure, Manor said that there was simply more demand for it. “We surveyed the customers and they said, hey, we want, in addition to GCP, we want AWS,” he said. But support for Azure will come later this year and the company already has a number of preview customers for it. In addition, Anthos will also come to bare metal servers in the future.

Looking even further ahead, Manor also noted that better support for machine learning workloads is on the way. Many businesses, after all, want to be able to update and run their models right where their data resides, no matter what cloud that may be. There, too, the promise of Anthos is that developers can write the application once and then run it anywhere.

“I think a lot of the initial response and excitement was from the developer audiences,” Jennifer Lin, Google Cloud’s VP of product management, told me. “Eric Brewer had led a white paper that we did to say that a lot of the Anthos architecture sort of decouples the developer and the operator stakeholder concerns. There hadn’t been a multi-cloud shared software architecture where we could do that and still drive emerging and existing applications with a common shared software stack.”

She also noted that a lot of Google Cloud’s ecosystem partners endorsed the overall Anthos architecture early on because they, too, wanted to be able to write once and run anywhere — and so do their customers.

Plaid is one of the launch partners for these new capabilities. “Our customers rely on us to be always available and as a result we have very high reliability requirements,” said Naohiko Takemura, Plaid’s head of engineering. “We pursued a multi-cloud strategy to ensure redundancy for our critical KARTE service. Google Cloud’s Anthos works seamlessly across GCP and our other cloud providers preventing any business disruption. Thanks to Anthos, we prevent vendor lock-in, avoid managing cloud-specific infrastructure, and our developers are not constrained by cloud providers.”

With this release, Google Cloud is also bringing deeper support for virtual machines to Anthos, as well as improved policy and configuration management.

Over the next few months, the Anthos Service Mesh will also add support for applications that run in traditional virtual machines. As Lin told me, “a lot of this is about driving better agility and taking the complexity out of it, so that we have abstractions that work across any environment, whether it’s legacy or new or on-prem or AWS or GCP.”


By Frederic Lardinois

Comet AI nabs $4.5M for more efficient machine learning model management

As we get further along in the new way of working, the new normal if you will, finding more efficient ways to do just about everything is becoming paramount for companies looking at buying new software services. To that end, Comet AI announced a $4.5 million investment today as it tries to build a more efficient machine learning platform.

The money came from existing investors Trilogy Equity Partners, Two Sigma Ventures and Founders’ Co-op. Today’s investment comes on top of an earlier $2.3 million seed round.

“We provide a self-hosted and cloud-based meta machine learning platform, and we work with data science AI engineering teams to manage their work to try and explain and optimize their experiments and models,” company co-founder and CEO Gideon Mendels told TechCrunch.

In a growing field with lots of competitors, Mendels says his company’s ability to move easily between platforms is a key differentiator.

“We’re essentially infrastructure agnostic, so we work whether you’re training your models on your laptop, your private cluster or on many of the cloud providers. It doesn’t actually matter, and you can switch between them,” he explained.

The company has 10,000 users on its platform across a community product and a more advanced enterprise product that includes customers like Boeing, Google and Uber.

Mendels says Comet has been able to take advantage of the platform’s popularity to build models based on data customers have made publicly available. The first one involves predicting when a model begins to show training fatigue. The Comet model can spot this happening and signal data scientists to shut the run down, catching the problem 30% faster than this kind of fatigue would normally surface.
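Detecting that kind of fatigue is, at its simplest, plateau detection on a stream of training metrics. The sketch below is a minimal rule-based version with invented thresholds; Comet’s actual detector is a learned model, not this heuristic.

```python
# Flag when a loss curve has stopped improving so the run can be killed
# early. `patience` and `min_delta` are illustrative knobs.

def fatigue_step(losses, patience=5, min_delta=1e-3):
    """Return the step at which to stop, or None if training is still improving."""
    best, best_step = float("inf"), 0
    for step, loss in enumerate(losses):
        if loss < best - min_delta:       # meaningful improvement
            best, best_step = loss, step
        elif step - best_step >= patience:
            return step                   # no progress for `patience` steps
    return None

plateaued = [1.0, 0.6, 0.4, 0.39] + [0.39] * 10
print(fatigue_step(plateaued))  # -> 8, well before the run would end on its own
```

Every step saved is GPU time not spent on a model that has stopped learning, which is where the efficiency claim comes from.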

The company launched in Seattle out of the Techstars Alexa Accelerator in 2017. The community product debuted in 2018.


By Ron Miller

Will China’s coronavirus-related trends shape the future for American VCs?

For the past month, VC investment pace seems to have slacked off in the U.S., but deal activities in China are picking up following a slowdown prompted by the COVID-19 outbreak.

According to PitchBook, “Chinese firms recorded 66 venture capital deals for the week ended March 28, the most of any week in 2020 and just below figures from the same time last year,” (although 2019 was a slow year). There is a natural lag between when deals are made and when they are announced, but still, there are some interesting trends that I couldn’t help noticing.

While many U.S.-based VCs haven’t had a chance to focus on new deals, recent investment trends coming out of China may indicate which shifts might persist after the crisis and what it could mean for the U.S. investor community.

Image Credits: PitchBook


By Walter Thompson

AWS and Facebook launch an open-source model server for PyTorch

AWS and Facebook today announced two new open-source projects around PyTorch, the popular open-source machine learning framework. The first of these is TorchServe, a model serving framework for PyTorch that will make it easier for developers to put their models into production. The other is TorchElastic, a library that makes it easier for developers to build fault-tolerant training jobs on Kubernetes clusters, including AWS’s EC2 spot instances and Elastic Kubernetes Service.

In many ways, the two companies are taking what they have learned from running their own machine learning systems at scale and putting it into these projects. For AWS, that’s mostly SageMaker, the company’s machine learning platform, but as Bratin Saha, AWS VP and GM for Machine Learning Services, told me, the work on PyTorch was mostly motivated by requests from the community. And while there are obviously other model servers like TensorFlow Serving and the Multi Model Server available today, Saha argues that it would be hard to optimize those for PyTorch.

“If we tried to take some other model server, we would not be able to optimize it as much, as well as create it within the nuances of how PyTorch developers like to see this,” he said. AWS has lots of experience in running its own model servers for SageMaker that can handle multiple frameworks, but the community was asking for a model server that was tailored toward how they work. That also meant adapting the server’s API to what PyTorch developers expect from their framework of choice, for example.

As Saha told me, the server that AWS and Facebook are now launching as open source is similar to what AWS is using internally. “It’s quite close,” he said. “We actually started with what we had internally for one of our model servers and then put it out to the community, worked closely with Facebook, to iterate and get feedback — and then modified it so it’s quite close.”

Bill Jia, Facebook’s VP of AI Infrastructure, also told me he’s very happy about how his team and the community have pushed PyTorch forward in recent years. “If you look at the entire industry community — a large number of researchers and enterprise users are using AWS,” he said. “And then we figured out if we can collaborate with AWS and push PyTorch together, then Facebook and AWS can get a lot of benefits, but more so, all the users can get a lot of benefits from PyTorch. That’s our reason for why we wanted to collaborate with AWS.”

As for TorchElastic, the focus here is on allowing developers to create training systems that can work on large distributed Kubernetes clusters where you might want to use cheaper spot instances. Those are preemptible, though, so your system has to be able to handle that, while traditionally, machine learning training frameworks often expect a system where the number of instances stays the same throughout the process. That, too, is something AWS originally built for SageMaker. There, it’s fully managed by AWS, though, so developers never have to think about it. For developers who want more control over their dynamic training systems or want to stay very close to the metal, TorchElastic now allows them to recreate this experience on their own Kubernetes clusters.
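The core idea behind surviving preemption is checkpoint-and-resume: a killed worker restarts from its last saved state rather than from step 0. The toy loop below illustrates just that pattern in plain Python (TorchElastic layers distributed rendezvous and rank re-assignment on top, which are omitted here; all names are illustrative):

```python
import json
import os
import tempfile

def train(total_steps, ckpt_path):
    """A resumable training loop: if the process dies (e.g. a spot
    instance is reclaimed), a restarted run resumes from the last
    checkpoint instead of starting over."""
    step, state = 0, 0.0
    if os.path.exists(ckpt_path):          # resume after a preemption
        with open(ckpt_path) as f:
            saved = json.load(f)
        step, state = saved["step"], saved["state"]
    while step < total_steps:
        state += 0.1                       # stand-in for one optimizer step
        step += 1
        if step % 10 == 0:                 # checkpoint periodically
            with open(ckpt_path, "w") as f:
                json.dump({"step": step, "state": state}, f)
    return step, state

# First run trains 25 steps; the second call resumes from the step-20
# checkpoint and continues to 30 rather than redoing earlier work.
ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
first = train(25, ckpt)
resumed = train(30, ckpt)
```

The trade-off is checkpoint frequency: checkpointing every step minimizes lost work on preemption but adds I/O overhead, which is why real systems checkpoint at intervals.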

AWS has a bit of a reputation when it comes to open source and its engagement with the open-source community. In this case, though, it’s nice to see AWS lead the way to bring some of its own work on building model servers, for example, to the PyTorch community. In the machine learning ecosystem, that’s very much expected, and Saha stressed that AWS has long engaged with the community as one of the main contributors to MXNet and through its contributions to projects like Jupyter, TensorFlow and libraries like NumPy.


By Frederic Lardinois

Pileus helps businesses cut their cloud spend

Israel-based Pileus, which is officially launching today, aims to help businesses keep their cloud spend under control. The company also today announced that it has raised a $1 million seed round from a private angel investor.

Using machine learning, the company’s platform continuously learns about how a user typically uses a given cloud and then provides forecasts and daily personalized recommendations to help them stay within a budget.

Pileus currently supports AWS, with support for Google Cloud and Microsoft Azure coming soon.

With all of the information it gathers about your cloud usage, the service can also monitor usage for any anomalies. Because, at its core, Pileus keeps a detailed log of all your cloud spend, it can also provide detailed reports and dashboards of what a user is spending on each project and resource.
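Pileus hasn’t disclosed its anomaly-detection method, but a minimal version of the idea — flagging days whose spend deviates sharply from the recent norm — looks something like this (the function, window and threshold are illustrative assumptions, not the product’s actual logic):

```python
from statistics import mean, stdev

def spend_anomalies(daily_spend, window=7, threshold=3.0):
    """Return the indices of days whose spend deviates from the trailing
    `window`-day mean by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(daily_spend)):
        history = daily_spend[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(daily_spend[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A sudden $500 day after a steady ~$100/day baseline gets flagged.
print(spend_anomalies([100, 102, 98, 101, 99, 100, 103, 500]))
```

A machine-learning approach like the one Pileus describes would additionally model seasonality (weekday vs. weekend usage, month-end batch jobs) that a flat z-score misses.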

If you’ve ever worked on a project like this, you know that these reports are only as good as the tags you use to identify each project and resource, so Pileus makes that a priority on its platform, with a tagging tool that helps enforce tagging policies.
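Enforcing a tagging policy boils down to checking every resource against a required set of tag keys before its spend can be attributed. A sketch of that check (the tag names and data shape are hypothetical, not Pileus’s schema):

```python
REQUIRED_TAGS = {"project", "owner", "cost-center"}   # hypothetical policy

def untagged_resources(resources, required=REQUIRED_TAGS):
    """Return the IDs of resources that are missing any required tag key,
    or that carry an empty value for one -- the resources whose spend
    can't be attributed to a project or owner."""
    bad = []
    for res in resources:
        tags = res.get("tags", {})
        if any(not tags.get(key) for key in required):
            bad.append(res["id"])
    return bad
```

Run against a resource inventory, the result is the remediation list a tool like this would surface to keep cost reports trustworthy.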

“My team and I spent many sleepless nights working on this solution,” says Pileus CEO Roni Karp. “We’re thrilled to finally be able to unleash Pileus to the masses and help everyone gain more efficiency of their cloud experience while helping them understand their usage and costs better than ever before.”

Pileus currently offers a free 30-day trial. After that, users can opt to pay either $180 per month or $800 per year. At those prices, the service isn’t really worthwhile until your cloud spend is significantly higher than that, of course.

The company isn’t just focused on individual businesses, though. It’s also targeting managed service providers that can use the platform to create reports and manage their own customer billing. Karp believes this will become a significant source of revenue for Pileus because “there are not many good tools in the field today, especially for Azure.”

It’s no secret that Pileus is launching into a crowded market, where well-known incumbents like Cloudability already share mindshare with a growing number of startups. Karp, however, believes that Pileus can stand out, largely because of its machine learning platform and its ability to provide users with immediate value, whereas, he argues, it often takes several weeks for other platforms to deliver results.


By Frederic Lardinois

Pinpoint releases dashboard to bring visibility to software engineering operations

As companies look for better ways to understand how different departments work at a granular level, engineering has traditionally been a black box of siloed data. Pinpoint, an Austin-based startup, has been working on a platform to bring this information into a single view, and today it released a dashboard to help companies understand what’s happening across software engineering from an operational perspective.

Jeff Haynie, co-founder and CEO at Pinpoint, says the company’s mission for the last two years has been to give greater visibility into the engineering department, something he says is even more important in the current context, with workers spread out at home.

“Companies give engineering a bunch of money, and they build a bunch of amazing things, but in the end it is just a black box and we really don’t know what happens,” Haynie said. He says his company has been working to take all of the data to try and contextualize it, bring it together and correlate that information.

Today, they are introducing a dashboard that pulls everything they’ve been building into a single view, and it’s 100% self-serve. Previously you needed a fair amount of hand-holding from Pinpoint personnel to get it up and running, but now you can download the product and sign into your various services, such as your git repository, your CI/CD software, your IDE and so forth.

What’s more, it provides a way for engineering personnel to communicate with one another without leaving the tool.

Pinpoint software engineering dashboard. Image Credit: Pinpoint

“Obviously we will handhold and help people as they need it, and we have an enterprise version of the product with a higher level of SLA, and we have a customer success team to do that, but we’ve really focused this new release on purely self service,” Haynie said.

What’s more, while there is already a free-forever version for teams of under 10 people, with the release of today’s product the company is offering unlimited access to the dashboard free for three months.

Haynie says they’re like any startup right now, but having experience with several other startups and having lived through 9/11, the dot-com crash, 2008 and so forth, he knows how to hunker down and preserve cash. At the same time, he says they are seeing a lot of in-bound interest in the product, and they wanted to come up with a creative way to help customers through this crisis, while putting the product out there for people to use.

“We’re like any other startup or any other business frankly at this point: we’re nervous and scared. How do you survive this [and how long will it last]? The other side of it is that we’re rushing to take advantage of this inbound interest that we’re getting and trying to sort of seize the opportunity and try to be creative about how we help them.”

The startup hopes that if companies find the product useful, after three months they won’t mind paying for the full version. For now, it’s just putting it out there for free and seeing what happens with it — just another startup trying to find a way through this crisis.


By Ron Miller