Why AWS built a no-code tool

AWS today launched Amazon Honeycode, a no-code environment built around a spreadsheet-like interface that is a bit of a detour for Amazon’s cloud service. Typically, after all, AWS is all about giving developers the tools to build their applications — but they then have to put the pieces together themselves. Honeycode, on the other hand, is meant to appeal to non-coders who want to build basic line-of-business applications. If you know how to work a spreadsheet and want to turn that into an app, Honeycode is all you need.

To understand AWS’s motivation behind the service, I talked to AWS VP Larry Augustin and Meera Vaidyanathan, a general manager at AWS.

“For us, it was about extending the power of AWS to more and more users across our customers,” explained Augustin. “We consistently hear from customers that there are problems they want to solve, they would love to have their IT teams or other teams — even outsourced help — build applications to solve some of those problems. But there’s just more demand for some kind of custom application than there are available developers to solve it.”

Image Credits: Amazon

In that respect then, the motivation behind Honeycode isn’t all that different from what Microsoft is doing with its PowerApps low-code tool. That, too, after all, opens up the Azure platform to users who aren’t necessarily full-time developers. AWS is taking a slightly different approach here, though, by emphasizing the no-code part of Honeycode.

“Our goal with Honeycode was to enable the people in the line of business, the business analysts, project managers, program managers who are right there in the midst, to easily create a custom application that can solve some of the problems for them without the need to write any code,” said Augustin. “And that was a key piece. There’s no coding required. And we chose to do that by giving them a spreadsheet-like interface that we felt many people would be familiar with as a good starting point.”

A lot of low-code/no-code tools also allow developers to then “escape the code,” as Augustin called it, but that’s not the intent here and there’s no real mechanism for exporting code from Honeycode and taking it elsewhere, for example. “One of the tenets we thought about as we were building Honeycode was, gee, if there are things that people want to do and we would want to answer that by letting them escape the code — we kept coming back and trying to answer the question, ‘Well, okay, how can we enable that without forcing them to escape the code?’ So we really tried to force ourselves into the mindset of wanting to give people a great deal of power without escaping to code,” he noted.

Image Credits: Amazon

There are, however, APIs that would allow experienced developers to pull in data from elsewhere. Augustin and Vaidyanathan expect that companies may do this for their users on the platform or that AWS partners may create these integrations, too.

Even with these limitations, though, the team argues that you can build some pretty complex applications.

“We’ve been talking to lots of people internally at Amazon who have been building different apps and even within our team and I can honestly say that we haven’t yet come across something that is impossible,” Vaidyanathan said. “I think the level of complexity really depends on how expert of a builder you are. You can get very complicated with the expressions [in the spreadsheet] that you write to display data in a specific way in the app. And I’ve seen people write — and I’m not making this up — 30-line expressions that are just nested and nested and nested. So I really think that it depends on the skills of the builder and I’ve also noticed that once people start building on Honeycode — myself included — I start with something simple and then I get ambitious and I want to add this layer to it — and I want to do this. That’s really how I’ve seen the journey of builders progress. You start with something that’s maybe just one table and a couple of screens, and very quickly, before you know, it’s a far more robust app that continues to evolve with your needs.”

Another feature that sets Honeycode apart is that a spreadsheet sits at the center of its user interface. In that respect, the service may seem a bit like Airtable, but I don’t think that comparison holds up, given that the two services take these spreadsheets in very different directions. I’ve also seen it compared to Retool, which may be a better comparison, but Retool is going after more advanced developers and doesn’t hide the code. There is a reason, though, why these services were all built around spreadsheets: everybody is familiar with how to use them.

“People have been using spreadsheets for decades,” noted Augustin. “They’re very familiar. And you can write some very complicated, deep, very powerful expressions and build some very powerful spreadsheets. You can do the same with Honeycode. We felt people were familiar enough with that metaphor that we could give them that full power along with the ability to turn that into an app.”

The team itself used the service to manage the launch of Honeycode, Vaidyanathan stressed — and to vote on the name for the product (though Vaidyanathan and Augustin wouldn’t say which other names they considered).

“I think we have really, in some ways, a revolutionary product in terms of bringing the power of AWS and putting it in the hands of people who are not coders,” said Augustin.


By Frederic Lardinois

AWS launches Amazon Honeycode, a no-code mobile and web app builder

AWS today announced the beta launch of Amazon Honeycode, a new fully managed low-code/no-code development tool that aims to make it easy for anybody in a company to build their own applications. All of this, of course, is backed by a database in AWS and a web-based drag-and-drop interface builder.

Developers can build applications for up to 20 users for free. After that, they pay per user and for the storage their applications take up.

Image Credits: Amazon/AWS

Like similar tools, Honeycode provides users with a set of templates for common use cases like to-do list applications, customer trackers, surveys, schedules and inventory management. Traditionally, AWS argues, a lot of businesses have relied on shared spreadsheets to do these things.

“Customers try to solve for the static nature of spreadsheets by emailing them back and forth, but all of the emailing just compounds the inefficiency because email is slow, doesn’t scale, and introduces versioning and data syncing errors,” the company notes in today’s announcement. “As a result, people often prefer having custom applications built, but the demand for custom programming often outstrips developer capacity, creating a situation where teams either need to wait for developers to free up or have to hire expensive consultants to build applications.”

It’s no surprise, then, that Honeycode uses a spreadsheet view as its core data interface, given how familiar virtually every potential user is with the concept. To manipulate data, users work with standard spreadsheet-style formulas, which seems to be about the closest the service gets to actual programming.

AWS says these databases can easily scale up to 100,000 rows per workbook. With this, AWS argues, users can then focus on building their applications without having to worry about the underlying infrastructure.

As of now, it doesn’t look like users will be able to bring in any outside data sources, though that may still be on the company’s roadmap. On the other hand, these kinds of integrations would also complicate the process of building an app and it looks like AWS is trying to keep things simple for now.

Honeycode currently only runs in the AWS US West region in Oregon but is coming to other regions soon.

Among Honeycode’s first customers are SmugMug and Slack. “We’re excited about the opportunity that Amazon Honeycode creates for teams to build apps to drive and adapt to today’s ever-changing business landscape,” said Brad Armstrong, VP of Business and Corporate Development at Slack in today’s release. “We see Amazon Honeycode as a great complement and extension to Slack and are excited about the opportunity to work together to create ways for our joint customers to work more efficiently and to do more with their data than ever before.”


By Frederic Lardinois

Dell’s debt hangover from $67B EMC deal could put VMware stock in play

When Dell bought EMC in 2016 for $67 billion it was one of the biggest acquisitions in tech history, and it brought with it a boatload of debt. Since then Dell has been working on ways to mitigate that debt by selling off various pieces of the corporate empire and going public again, but one of its most valuable assets remains VMware, a company that came over as part of the huge EMC deal.

The Wall Street Journal reported yesterday that Dell is considering selling part of its stake in VMware. The news sent the stock of both companies soaring.

It’s important to understand that even though VMware is part of the Dell family, it runs as a separate company, with its own stock and operations, just as it did when it was part of EMC. Still, Dell owns 81% of that stock, so it could sell a substantial stake and still own a majority of the company, or it could sell it all, incorporate VMware fully into the Dell family, or of course do nothing at all.

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, thinks this might just be about floating a trial balloon. “Companies do things like this all the time to gauge value, together and apart, and my hunch is this is one of those pieces of research,” Moorhead told TechCrunch.

But as Holger Mueller, an analyst with Constellation Research, points out, it’s an idea that could make sense. “It’s plausible. VMware is more valuable than Dell, and their innovation track record is better than Dell’s over the last few years,” he said.

Mueller added that Dell has been juggling its debts since the EMC acquisition, and it will struggle to innovate its way out of that situation. What’s more, Dell has to wait on any decision until September 2021 when it can move some or all of VMware tax-free, five years after the EMC acquisition closed.

“While Dell can juggle finances, it cannot master innovation. The company’s cloud strategy is only working on a shrinking market and that ain’t easy to execute and grow on. So yeah, next year makes sense after the five year tax free thing kicks in,” he said.

In between the spreadsheets

VMware is worth $63.9 billion today, while Dell is valued at a far more modest $38.9 billion, according to Yahoo Finance data. But beyond the fact that the companies’ market caps differ, they are also quite different in terms of their ability to generate profit.

Looking at their most recent quarters each ending May 1, 2020, Dell turned $21.9 billion in revenue into just $143 million in net income after all expenses were counted. In contrast, VMware generated just $2.73 billion in revenue, but managed to turn that top line into $386 million worth of net income.

So, VMware is far more profitable than Dell from a far smaller revenue base. What’s more, VMware grew last year (from $2.45 billion to $2.73 billion in revenue in its most recent quarter) while Dell shrank, from $21.91 billion in Q1 F2020 revenue to $21.90 billion in its own most recent three-month period.

VMware also has growing subscription software (SaaS) revenues. Investors love that top-line varietal in 2020, having pushed the valuation of SaaS companies to new heights. VMware grew its SaaS revenues from $411 million in the year-ago period to $572 million in its most recent quarter. That’s not rocket-ship growth, mind you, but the business category was VMware’s fastest-growing segment in both percentage and gross dollar terms.

So VMware is worth more than Dell, and there are some understandable reasons for the situation. Why wouldn’t Dell sell some VMware to lower its debts if the market is willing to price the virtualization company so strongly? Heck, with less debt perhaps Dell’s own market value would rise.

It’s all about that debt

Almost four years after the deal closed, Dell is still struggling to figure out how to handle all the debt, and in a weak economy, that’s an even bigger challenge now. At some point, it would make sense for Dell to cash in some of its valuable chips, and its most valuable one is clearly VMware.

Nothing is imminent because of the five year tax break business, but could something happen? September 2021 is a long time away, and a lot could change between now and then, but on its face, VMware offers a good avenue to erase a bunch of that outstanding debt very quickly and get Dell on much firmer financial ground. Time will tell if that’s what happens.


By Ron Miller

Suse launches version 2.0 of its Cloud Foundry-based Cloud Application Platform

Suse, the well-known German open-source company that went through more corporate owners than anybody can remember until it finally became independent again in 2019, has long been a champion of Cloud Foundry, the open-source platform-as-a-service project. And while you may think of Suse as a Linux distribution, today’s company also offers a number of other services, including a container platform, DevOps tools and the Suse Cloud Application Platform, based on Cloud Foundry. Today, right in time for the bi-annual (and now virtual) Cloud Foundry Summit, the company announced the launch of version 2.0 of this platform.

The promise of the Application Platform, and indeed Cloud Foundry, is that it allows for one-step application deployments and an enterprise-ready platform to host them.

The marquee feature of version 2.0 is that it now includes a new Kubernetes Operator, a standard way of packaging, deploying and managing container-based applications, which makes deploying and managing Cloud Foundry on Kubernetes infrastructure easier.

Suse President of Engineering and Innovation Thomas Di Giacomo also notes that it’s now easier to “install, operate and maintain on Kubernetes platforms anywhere — on premises and in public clouds,” and that it opens up a new path for existing Cloud Foundry users to move to a modern container-based architecture. Indeed, for the last few years, Suse has been crucial to bringing both Kubernetes support to Cloud Foundry and Cloud Foundry to Kubernetes.

Cloud Foundry, it’s worth noting, long used its home-grown container orchestration tool, which the community developed before anybody had even heard of Kubernetes. Over the course of the last few years, though, Kubernetes became the de facto standard for container management, and today, Cloud Foundry supports both its own Diego tool and Kubernetes.

“Suse Cloud Application Platform 2.0 builds on and advances those efforts, incorporating several upstream technologies recently contributed by Suse to the Cloud Foundry Community,” writes Di Giacomo. “These include KubeCF, a containerized version of the Cloud Foundry Application Runtime designed to run on Kubernetes, and Project Quarks, a Kubernetes operator for automating deployment and management of Cloud Foundry on Kubernetes.”


By Frederic Lardinois

Cloud Foundry gets an updated CLI to make life easier for enterprise developers

The Cloud Foundry Foundation, the nonprofit behind the popular open-source enterprise platform-as-a-service project, is holding its developer conference today. What’s usually a bi-annual community gathering (traditionally one in Europe and one in North America) is now a virtual event, but there’s still plenty of news from the Summit, both from the organization itself and from the wider ecosystem.

After going through a number of challenging technical changes in order to adapt to the new world of containers and DevOps, the organization’s focus these days is squarely on improving the developer experience around Cloud Foundry (CF). The promise of CF, after all, has always been that it would make life easier for enterprise developers (assuming they follow the overall CF processes).

“There are really two areas of focus that our community has: number one, re-platform on Kubernetes. No major announcements about that. […] And then the secondary focus is continuing to evolve our developer experience,” Chip Childers, the executive director of the Cloud Foundry Foundation, told me ahead of today’s announcements.

At the core of the CF experience is its “cf” command-line interface (CLI). With today’s update, this is getting a number of new capabilities, mostly with an eye to giving developers more flexibility to support their own workflows.

“The cf CLI v7 was made possible through the tremendous work of a diverse, distributed group of collaborators and committers,” said Josh Collins, Cloud Foundry’s CLI project lead and senior product manager at VMware. “Modern development techniques are much simpler with Cloud Foundry as a result of the new CLI, which abstracts away the nuances of the CF API into a command-line interface that’s easy and elegant to use.”

Built on top of CF’s v3 APIs, which have been in the making for a while, the new CLI enables features like rolling app deployments, which allow developers to push updates without downtime. “Let’s say you have a number of instances of the application out there and you want to slowly roll instance by instance to perform the upgrade and allow traffic to be spread across both new and old versions,” explained Childers. “Being able to do that with just a simple command is a very powerful thing.”
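The single command Childers describes can be sketched like this, assuming cf CLI v7 and a hypothetical app name:

```shell
# Roll out a new version instance by instance; old instances keep serving
# traffic until their replacements come up healthy (cf CLI v7).
cf push my-app --strategy rolling

# If the rollout misbehaves partway through, revert to the previous version.
cf cancel-deployment my-app
```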

Developers can also now run sub-steps of their “cf push” processes. With this, they get more granular control over their deployments (“cf push” is the command for deploying a CF application), and they also gain the ability to push apps that run multiple processes, say a UI process and a worker process.
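As a rough illustration of those sub-steps, assuming cf CLI v7 (the app name, process commands and manifest contents here are hypothetical):

```shell
# A v3-style manifest declaring two processes: a web UI and a background worker.
cat > manifest.yml <<'EOF'
applications:
- name: my-app
  processes:
  - type: web
    command: npm start
    instances: 2
  - type: worker
    command: node worker.js
    instances: 1
EOF

# The stages that a plain "cf push" bundles together:
cf create-app my-app                # create the app object without staging it
cf apply-manifest -f manifest.yml   # apply processes, routes and env settings
cf push my-app                      # upload, stage and start both processes
```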

In the overall Cloud Foundry ecosystem, things continue at their regular pace, with EngineerBetter, for example, joining the Cloud Foundry Foundation as a new member, Suse updating its Cloud Application Platform and long-time CF backers like anynines, Atos and Grape Up updating their respective CF-centric platforms, too. Stark & Wayne, which has long offered a managed CF solution, is launching new support options, adding college-style advisory sessions and an update to its Kubernetes-centric Gluon controller for CF deployments.


By Frederic Lardinois

Ampere announces latest chip with a 128 core processor

In the chip game, more is usually better, and to that end, Ampere announced the next chip on its product roadmap today: the Altra Max, a 128-core processor the company says is designed specifically to handle cloud-native, containerized workloads.

What’s more, the company has designed the chip so that it will fit in the same socket as its 80-core product announced last year (and in production now). That means engineers can use the same socket when designing for the new chip, which saves engineering time and eases production, says Jeff Wittich, VP of products at the company.

Wittich says that his company is working with manufacturers today to make sure they can build for all of the requirements for the more powerful chip. “The reason we’re talking about it now, versus waiting until Q4 when we’ve got samples going out the door is because it’s socket compatible, so the same platforms that the Altra 80 core go into, this 128 core product can go into,” he said.

He says that containerized workloads, video encoding, large scale out databases and machine learning inference will all benefit from having these additional cores.

While he wouldn’t comment on any additional funding, the company has raised $40 million, according to Crunchbase data, and Wittich says it has enough funding to go into high-volume production later this year on its existing products.

Like everyone, the company has faced challenges keeping a consistent supply chain throughout the pandemic, but when it started to hit in Asia at the beginning of this year, the company set a plan in motion to find backup suppliers for the parts they would need should they run into pandemic-related shortages. He says that it took a lot of work, planning and coordination, but they feel confident at this point in being able to deliver their products in spite of the uncertainty that exists.

“Back in January we actually already went through [our list of suppliers], and we diversified our supply chain and made sure that we had options for everything. So we were able to get in front of that before it ever became a problem,” he said.

“We’ve had normal kinds of hiccups here and there that everyone’s had in the supply chain, where things get stuck in shipping and they end up a little bit late, but we’re right on schedule with where we were.”

The company is also planning ahead for its 2022 release, which is already in development.

“We’ve got a test chip running through five nanometer right now that has the key IP and some of the key features of that product, so that we can start testing those out in silicon pretty soon,” he said.

Finally, the company announced that it’s working with some new partners including Cloudflare, Packet (which was acquired by Equinix in January), Scaleway and Phoenics Electronics, a division of Avnet. These partnerships provide another way for Ampere to expand its market as it continues to develop.

The company was founded in 2017 by former Intel president Renee James.


By Ron Miller

Salesforce introduces several new developer tools including serverless functions

Salesforce has a bunch of announcements coming out of the virtual TrailheaDX conference taking place later this week, starting today with some new developer tools. The goal of these tools is to give developers a more modern way of creating applications on top of the Salesforce platform.

Perhaps the most interesting of the three being announced today is Salesforce Functions, which enable developers to build serverless applications on top of Salesforce. With a serverless approach, the developer creates a series of functions that trigger an operation. The cloud provider then delivers the exact amount of infrastructure resources required to run that operation and nothing more.

Wade Wegner, VP of product for Salesforce and Salesforce DX, says the Salesforce offering gives developers a lot of flexibility around development languages such as Node.js or Java, and cloud platforms such as AWS or Azure. “I can just write my code, deploy it and let Salesforce operate it for me,” he said.

Wegner explained that the new approach lets developers build serverless applications with data that lives in Salesforce, and then run it on elastic infrastructure. This gives them the benefits of vertical and horizontal scale without having to be responsible for managing all aspects of how their application will run on the cloud infrastructure.

In addition to Functions, the company is also announcing Code Builder, a web-based IDE based on Microsoft Visual Studio Code Spaces. “By leveraging Visual Studio Code Spaces we can bring the same capabilities to developers right in the browser,” Wegner said.

He adds that this enables them to be more productive with support for many languages and frameworks in a browser in the context of the environment that they’re doing their work, while giving them a consistent and familiar experience.

Finally, the company is announcing the DevOps Center, which is a place to manage the growing complexity of delivering applications built on top of Salesforce in a modern continuous way. “It is really meant to provide new ways with which teams of developers can collaborate around the work that they’re doing, and to manage the complexities of continuously delivering applications…,” he said.

As is typical for Salesforce, the company is announcing these tools today, but they will not be generally available for some time. Functions and Code Builder are both in pilot, while DevOps Center will be available as a developer preview later this year.


By Ron Miller

Loodse becomes Kubermatic and open sources Kubernetes automation platform

Loodse, a German Kubernetes automation platform, announced today that it was rebranding as Kubermatic. While it was at it, the company also announced that it was open sourcing its Kubermatic Kubernetes Platform under the Apache 2.0 license.

Co-founder Sebastian Scheele says that his company’s Kubernetes solution can provision clusters and applications on any cloud, as well as in a data center running, for example, OpenStack or VMware. What’s more, it can do so much faster by automating much of the operations side of running Kubernetes clusters.

“We wanted to really have a cloud native way to run and manage Kubernetes. And so it’s running the Kubernetes master itself, which is completely containerized on top of Kubernetes, rather than being run on VMs. This helps provide you with better scalability, but also because it’s running on Kubernetes, we get all of the resilience and auto scaling out of Kubernetes itself,” Scheele told TechCrunch.

He says that he and his co-founder Julian Hansert have always had a strong commitment to open source, and offering Kubermatic platform under the Apache 2.0 license is a way to show that to the community. “One of the big [things] we can bring to the table is making Kubermatic completely open source, while following the Open-core model, and having a strong commitment to open source to the world and also to the community,” he said.

Image Credit: Kubermatic

As for why it’s rebranding, he says that the original company name is a German word that means navigation pilot for a ship. The name is a nod to its Hamburg base, which is a hub for container ships. It makes sense to Germans, but not others, so they wanted a name that more broadly reflected what the company does.

“Now that we are open sourcing Kubermatic, we also thought that people should understand our vision and what’s our DNA. It’s Kubernetes automation, helping our customers to really save money on Kubernetes operations by automating as much as possible on the operation level, so our users can really focus on building new applications,” he explained.

The company launched four years ago and has taken no funding, completely bootstrapping along the way. It’s worth noting that it was one of the top five committers to the open source Kubernetes project in 2019, along with much bigger names including Google, VMware, Red Hat and Microsoft.

Today the company has 50 employees, most of whom are working remotely by choice rather than due to the pandemic. In fact, the company has employees working in 10 different countries. Scheele says that has allowed him to work with people with a broad set of skills who don’t necessarily live in Hamburg, where he and Hansert are based.


By Ron Miller

Uptycs lands $30M Series B to keep building security analytics platform

Every company today is struggling to deal with security and understanding what is happening on their systems. This is even more pronounced as companies have had to move their employees to work from home. Uptycs, a Boston-area security analytics startup, announced a $30 million Series B today to help companies detect and understand breaches when they happen.

Sapphire Ventures led the round with help from Comcast Ventures and ForgePoint Capital. The startup has now raised a total of $43 million, according to the company. Under the terms of today’s deal, Sapphire Ventures’ president and managing director Jai Das will be joining the company’s board.

Company co-founder and CEO Ganesh Pai says he and his co-founders previously worked at Akamai, where they observed Akamai’s debugging and diagnostic tools, which were designed to work at massive scale. The founders believed they could use a similar approach to building a security analytics platform, and in 2016 the group launched Uptycs.

“We help people to solve intrusion detection, compliance and audit and incident investigation. These are table stakes requirements [for security solutions] that most large scale organizations have, and of course with their scale the challenges vary. What we at Uptycs do is provide a solution for that,” Pai told TechCrunch.

The company uses a flight recorder approach to security, giving security operations teams the ability to sift through the data and review exactly how a detection happened and how the intruder got through the company’s defenses.

He recognizes his company is fortunate to get a round this large right now, but he says the solution has attracted a number of customers signing seven-digit contracts, and this in turn got the attention of investors. “That customer engagement, their experience and this commitment from our customers led to this substantial round of funding,” he said.

The company currently has 65 employees spread across offices in Waltham, a Boston suburb, as well as two offices in India. Pai says the plan is to double that number in the next 12 months. “Between the cash flow from our existing customers and the pipeline for us and the funding, we are planning to grow in a meaningful way. If everything aligns with our expectation we will double our team size in the next 12 months,” he said.

As he grows his company in this way, Pai says they are talking to their investors about how to build a diverse workforce. “We’ve thought long and hard about it, both in terms of diversity and inclusion. It is a lot harder to execute because at the end of the day, there is a finite talent pool, but we are having conversations with our investors, who have seen patterns of success in terms of implementing such plans from growth stage ventures,” he said.

He added, “And of course we are a very early stage company, but we are extremely cognizant, and given the current circumstances are acutely aware that we need to do our very best and make a difference.”

As the company has moved to work from home across its operations, he says it has benefited from working in the cloud from the start. “As an organization we are very fortunate that we built our organization so that everything runs in the cloud and everyone has been able to remain very productive,” he said.


By Ron Miller

Google Cloud launches Filestore High Scale, a new storage tier for high-performance computing workloads

Google Cloud today announced the launch of Filestore High Scale, a new storage option — and tier of Google’s existing Filestore service — for workloads that can benefit from access to a distributed high-performance storage option.

With Filestore High Scale, which is based on technology Google acquired when it bought Elastifile in 2019, users can deploy shared file systems with hundreds of thousands of IOPS, tens of GB/s of throughput and hundreds of terabytes of capacity.

“Virtual screening allows us to computationally screen billions of small molecules against a target protein in order to discover potential treatments and therapies much faster than traditional experimental testing methods,” says Christoph Gorgulla, a postdoctoral research fellow at Harvard Medical School’s Wagner Lab, which already put the new service through its paces. “As researchers, we hardly have the time to invest in learning how to set up and manage a needlessly complicated file system cluster, or to constantly monitor the health of our storage system. We needed a file system that could handle the load generated concurrently by thousands of clients, which have hundreds of thousands of vCPUs.”

The standard Google Cloud Filestore service already supports some of these use cases, but the company notes that it specifically built Filestore High Scale for high-performance computing (HPC) workloads. In today’s announcement, the company specifically focuses on biotech use cases around COVID-19. Filestore High Scale is meant to support tens of thousands of concurrent clients, which isn’t necessarily a standard use case, but developers who need this kind of power can now get it in Google Cloud.

In addition to High Scale, Google also today announced that all Filestore tiers now offer beta support for NFS IP-based access controls, an important new feature for those companies that have advanced security requirements on top of their need for a high-performance, fully managed file storage service.
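The announcement doesn't show how the new IP-based access controls are configured, but Filestore's NFS export options are typically supplied through a gcloud flags file. The sketch below is an assumption based on that mechanism; the field names, IP range and tier are illustrative, not taken from the announcement:

```shell
# Sketch: restricting a Filestore share to a client IP range via NFS
# export options in a gcloud flags file. Field names and values are
# assumptions; consult the current gcloud Filestore reference.
cat > nfs-flags.json <<'EOF'
{
  "--file-share": {
    "name": "vol1",
    "capacity": "10TB",
    "nfs-export-options": [
      {
        "access-mode": "READ_WRITE",
        "ip-ranges": ["10.0.0.0/24"],
        "squash-mode": "ROOT_SQUASH"
      }
    ]
  }
}
EOF

gcloud filestore instances create secure-share \
    --zone=us-central1-c \
    --network=name="default" \
    --flags-file=nfs-flags.json
```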


By Frederic Lardinois

New Box tools should help ease creation of digitally driven workflows

As COVID-19 has forced companies to move employees from office to home, cloud services have seen a burst in business. Box has been speeding up its product roadmap to help companies that are in the midst of this transition. Today, the company announced the Box Relay template library, which includes a series of workflow templates to help customers build digital workflows faster.

Box CEO Aaron Levie says that the rapid shift to work from home has been a massive accelerant to digital transformation, in some cases driving years of digital transformation into a matter of weeks and months. He says that has made the need to digitize business processes more urgent than ever.

In fact, when he appeared on Extra Crunch Live last month, he indicated that businesses still have way too many manual processes:

We think we’re [in] an environment that anything that can be digitized probably will be. Certainly as this pandemic has reinforced, we have way too many manual processes in businesses. We have way too slow ways of working together and collaborating. And we know that we’re going to move more and more of that to digital platforms.

Box Relay is the company’s workflow tool, and while it has had the ability to create workflows, it required a certain level of knowledge and way of thinking to make that happen. Levie says that they wanted to make it as simple as possible for customers to build workflows to digitize manual processes.

“We are announcing an all new set of Box Relay templates, which are going straight to the heart of how do you automate and digitize business processes across the entire enterprise and make it really simple to do that,” he explained.

This could include things like a contract review, change order process or budget review to name a few examples. The template includes the pieces to get going, but the customer can customize the process to meet the needs of the individual organization’s requirements.

Image Credits: Box

While this is confined to Box-built templates for now, Levie says that down the road this could include the ability for customers to deploy templates of their own, or even for third parties like systems integrators to build industry or client-specific templates. But for today, it’s just about the ones you get out of the box from Box.

At the same time, the company is announcing the File Request feature, a name Levie admits doesn’t really do the feature justice. The idea is that in a workflow such as a paperless bank loan process, the individual has to submit multiple documents without having a Box account. After the company receives the documents, it can kick off a workflow automatically based on receiving the set of documents.

He says the combination of these two new capabilities will give customers the ability to digitize more and more of their processes and bring in a level of automation that wasn’t previously possible in Relay. “The combination of these two features is about driving automation across the entire enterprise and digitizing many more paper-based and manual processes in the enterprise,” Levie said.

Box will not charge customers using Box Relay additional fees for these new features. File Request should be available at the end of this month, and the template library by the end of July, according to the company.


By Ron Miller

How Liberty Mutual shifted 44,000 workers from office to home

In a typical month, an IT department might deal with a small percentage of employees working remotely, but tracking a few thousand employees is one thing — moving an entire company offsite requires next-level planning.

To learn more about how large organizations are adapting to the rapid shift to working from home, we spoke to Liberty Mutual CIO James McGlennon, who helped orchestrate the move, about the challenges he faced as he shifted more than 44,000 employees in a variety of jobs, locations, cultures and living situations from office to home in short order.

Laying the groundwork

Insurance company Liberty Mutual is headquartered in the heart of Boston, but the company has offices in 29 countries. While some staffers in parts of Asia and Europe were sent home earlier in the year, by mid-March the company had closed all of its offices in the U.S. and Canada, eventually sending every employee home.

McGlennon said he never imagined such a situation, but the company had run into networking issues in recent years that gave it an inkling of what one might look like. That included an unexpected incident in which two points on a network ring around one of its main data centers went down in quick succession: first when a backhoe hit a line, and then when someone stole a stretch of fiber-optic cable elsewhere on the ring.

That got the CIO and his team thinking about how to respond to worst cases. “We certainly hadn’t contemplated needing to get 44,000 people working from home or working remotely so quickly, but there have been a few things that have happened over the last few years that made me think,” he said.


By Ron Miller

Gauging growth in the most challenging environment in decades

Traditionally, measuring business success requires a greater understanding of your company’s go-to-market lifecycle, how customers engage with your product and the macro-dynamics of your market. But in the most challenging environment in decades, those metrics are out the window.

Enterprise application and SaaS companies are changing their approach to measuring performance and preparing to grow when the economy begins to recover. While there are no blanket rules or guidance that applies to every business, company leaders need to focus on a few critical metrics to understand their performance and maximize their opportunities. This includes understanding their burn rate, the overall real market opportunity, how much cash they have on hand and their access to capital. Analyzing the health of the company through these lenses will help leaders make the right decisions on how to move forward.

Play the game with the hand you were dealt. Earlier this year, our company closed a $40 million Series C round of funding, which left us in a strong cash position as we entered the market slowdown in March. Nonetheless, as the impact of COVID-19 became apparent, one of our board members suggested that we quickly develop a business plan that assumed we were running out of money. This would enable us to get on top of the tough decisions we might need to make on our resource allocation and the size of our staff.

While I understood the logic of his exercise, it is important that companies develop and execute against plans that reflect their actual situation. The reality is, we did raise the money, so we revised our plan to balance ultra-conservative forecasting (and as a trained accountant, this is no stretch for me!) with new ideas for how to best utilize our resources based on the market situation.

Burn rate matters, but not at the expense of your culture and your talent. For most companies, talent is both their most important resource and their largest expense. Therefore, it’s usually the first area that goes under the knife in order to reduce the monthly spend and optimize efficiency. Fortunately, heading into the pandemic, we had not yet ramped up hiring to support our rapid growth, so were spared from having to make enormously difficult decisions. We knew, however, that we would not hit our 2020 forecast, which required us to make new projections and reevaluate how we were deploying our talent.


By Walter Thompson

OpenStack adds the StarlingX edge computing stack to its top-level projects

The OpenStack Foundation today announced that StarlingX, a container-based system for running edge deployments, is now a top-level project. With this, it joins the main OpenStack private and public cloud infrastructure project, the Airship lifecycle management system, Kata Containers and the Zuul CI/CD platform.

What makes StarlingX a bit different from some of these other projects is that it is a full stack for edge deployments — and in that respect, it’s maybe more akin to OpenStack than the other projects in the foundation’s stable. It uses open-source components from the Ceph storage platform, the KVM virtualization solution, Kubernetes and, of course, OpenStack and Linux. The promise here is that StarlingX can provide users with an easy way to deploy container and VM workloads to the edge, all while being scalable, lightweight and providing low-latency access to the services hosted on the platform.

Early StarlingX adopters include China UnionPay, China Unicom and T-Systems. The original codebase was contributed to the foundation by Intel and Wind River Systems in 2018. Since then, the project has seen 7,108 commits from 211 authors.

“The StarlingX community has made great progress in the last two years, not only in building great open source software but also in building a productive and diverse community of contributors,” said Ildiko Vancsa, ecosystem technical lead at the OpenStack Foundation. “The core platform for low-latency and high-performance applications has been enhanced with a container-based, distributed cloud architecture, secure booting, TPM device enablement, certificate management and container isolation. StarlingX 4.0, slated for release later this year, will feature enhancements such as support for Kata Containers as a container runtime, integration of the Ussuri version of OpenStack, and containerization of the remaining platform services.”

It’s worth remembering that the OpenStack Foundation has gone through a few changes in recent years. The most important of these is that it is now taking on other open-source infrastructure projects that are not part of the core OpenStack project but are strategically aligned with the organization’s mission. The first of these to graduate out of the pilot project phase and become top-level projects were Kata Containers and Zuul in April 2019, with Airship joining them in October.

Currently, the only pilot project for the OpenStack Foundation is its OpenInfra Labs project, a community of commercial vendors and academic institutions, including the likes of Boston University, Harvard, MIT, Intel and Red Hat, that are looking at how to better test open-source code in production-like environments.



By Frederic Lardinois