Microsoft acquires Mover to help with Microsoft 365 cloud migration

Microsoft wants to make it as easy as possible to migrate to Microsoft 365, and today the company announced it had purchased a Canadian startup called Mover to help. The companies did not reveal the acquisition price.

Microsoft 365 is the company’s bundle that includes Office 365, Microsoft Teams, and security and workflow tools. The idea is to provide customers with a soup-to-nuts, cloud-based productivity package. Mover helps customers get files from another service into the Microsoft 365 cloud.

As Jeff Tepper wrote in a post on the Official Microsoft Blog announcing the acquisition, this is about helping customers get to the Microsoft cloud as quickly and smoothly as possible. “Today, Mover supports migration from over a dozen cloud service providers — including Box, Dropbox, Egnyte, and Google Drive — into OneDrive and SharePoint, enabling seamless file collaboration across Microsoft 365 apps and services, including the Office apps and Microsoft Teams,” Tepper wrote.

Tepper also points out that Microsoft will gain the expertise of the Mover team as it joins the company and builds on the migration tools already in place.

Tony Byrne, founder and principal analyst at Real Story Group, says that moving files from one system to another like this can be extremely challenging regardless of how you do it, and the file transfer mechanism is only part of it. “The transition to 365 from an on-prem system or competing cloud supplier is never a migration, per se. It’s a rebuild, with a completely different UX, admin model, set of services, and operational assumptions all built into the Microsoft cloud offering,” Byrne explained.

Mover is based in Calgary, Canada. It was founded in 2012 and raised $1 million, according to Crunchbase data. It counts some big names as customers, including Autodesk, Symantec and BuzzFeed.


By Ron Miller

Veteran enterprise exec Bob Stutz is heading back to SAP

Bob Stutz has had a storied career with enterprise software companies including stints at Siebel Systems, SAP, Microsoft and Salesforce. He announced on Facebook last week that he’s leaving his job as head of the Salesforce Marketing Cloud and heading back to SAP as president of customer experience.

Bob Stutz Facebook announcement

Constellation Research founder and principal analyst Ray Wang says that Stutz has a reputation for taking companies to the next level. He helped put Microsoft CRM on the map (although it still had just 2.7% market share in 2018, according to Gartner) and he helped move the needle at Salesforce Marketing Cloud.

Bob Stutz, SAP’s new president of customer experience. Photo: Salesforce

“Stutz was the reason Salesforce could grow in the Marketing Cloud and analytics areas. He fixed a lot of the fundamental architectural and development issues at Salesforce, and he did most of the big work in the first 12 months. He got the acquisitions going, as well,” Wang told TechCrunch. He added, “SAP has a big portfolio from CallidusCloud to Hybris to Qualtrics to put together. Bob is the guy you bring in to take a team to the next level.”

Brent Leary, who is a long-time CRM industry watcher, says the move makes a lot of sense for SAP. “Having Bob return to head up their Customer Experience business is a huge win for SAP. He’s been everywhere, and everywhere he’s been was better for it. And going back to SAP at this particular time may be his biggest challenge, but he’s the right person for this particular challenge,” Leary said.

The move comes against the backdrop of lots of changes at the German software giant. Just last week, long-time CEO Bill McDermott announced he was stepping down, with Jennifer Morgan and Christian Klein replacing him as co-CEOs. Earlier this year, the company saw a string of other long-time executives and board members head out the door, including SAP SuccessFactors COO Brigette McInnis-Day; Robert Enslin, president of its cloud business and a board member; CTO Björn Goerke; and Bernd Leukert, a member of the executive board.

Having Stutz on board could help stabilize the situation somewhat, as he brings more than 25 years of solid software company experience to bear on the company.


By Ron Miller

Pendo scores $100M Series E investment on $1 billion valuation

Pendo, the late stage startup that helps companies understand how customers are interacting with their apps, announced a $100 million Series E investment today on a valuation of $1 billion.

The round was led by Sapphire Ventures. Also participating were new investors General Atlantic and Tiger Global, and existing investors Battery Ventures, Meritech Capital, FirstMark, Geodesic Capital and Cross Creek. Pendo has now raised $206 million, according to the company.

Company CEO and co-founder Todd Olson says that one of the reasons they need so much money is they are defining a market, and the potential is quite large. “Honestly, we need to help realize the total market opportunity. I think what’s exciting about what we’ve seen in six years is that this problem of improving digital experiences is something that’s becoming top of mind for all businesses,” Olson said.

The company integrates with customer apps, capturing user behavior and feeding data back to product teams to help prioritize features and improve the user experience. In addition, the product provides ways to help those users, whether by walking them through different features, pointing out updates and new features or providing other notes. Developers can also ask for feedback to get direct input from users.

Olson says early on its customers were mostly other technology companies, but over time they have expanded into lots of other verticals, including insurance, financial services and retail, where companies increasingly see the digital experience as important. “A lot of this money is going to help grow our go-to-market teams and our product teams to make sure we’re getting our message out there, and we’re helping companies deal with this transformation,” he says. Today, the company has more than 1,200 customers.

While he wouldn’t commit to going public, he did say it’s something the executive team certainly thinks about, and it has started to put the structure in place to prepare should that time ever come. “This is certainly an option that we are considering, and we’re looking at ways in which to put us in a position to be able to do so, if and when the markets are good and we decide that’s the course we want to take.”


By Ron Miller

Zoho launches Catalyst, a new developer platform with a focus on microservices

Zoho may be one of the most underrated tech companies. The 23-year-old company, which at this point offers more than 45 products, has never taken outside funding and has no ambition to go public, yet it’s highly profitable and runs its own data centers around the world. And today, it’s launching Catalyst, a cloud-based developer platform with a focus on microservices that it hopes can challenge those of many of its larger competitors.

The company already offered a low-code tool for building business apps. But Catalyst is different. Zoho isn’t following in the footsteps of Google or Amazon here and offering a relatively unopinionated platform for running virtual machines and containers. Indeed, it does nothing of the sort. The company is 100% betting on serverless as the next major technology for building enterprise apps and the whole platform has been tuned for this purpose.

“Historically, when you look at cloud computing, when you look at any public clouds, they pretty much range from virtualizing your servers and renting out virtual servers all the way up the stack,” Raju Vegesna, Zoho’s chief evangelist, said when I asked him about this decision to bet on serverless. “But when you look at it from a developer’s point of view, you still have to deal with a lot of baggage. You still have to figure out the operating system, you still have to figure out the database. And then you have to scale and manage the updates. All of that has to be done at the application infrastructure level.” In recent years, though, said Vegesna, the focus has shifted to the app logic side, with databases and file servers being abstracted away. And that’s the trend Zoho is hoping to capitalize on with Catalyst.

What Catalyst does do is give advanced developers a platform to build, run and manage event-driven microservice-based applications that can, among other things, also tap into many of the tools that Zoho built for running its own applications, like a grammar checker for Zoho Writer, document previews for Zoho Drive or access to its Zia AI tools for OCR, sentiment analysis and predictions. The platform gives developers tools to orchestrate the various microservices, which obviously means it’ll make it easy to scale applications as needed, too. It integrates with existing CI/CD pipelines and IDEs.
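
Zoho hasn’t published SDK details here, but the shape of the event-driven functions Catalyst targets is familiar from other serverless platforms. A minimal sketch in Python — the handler signature and event fields are generic serverless conventions, not Zoho’s actual API:

```python
import json

# Illustrative event-driven microservice handler; the signature and event
# shape are generic serverless conventions, not Zoho Catalyst's actual SDK.
def handle_order_created(event, context):
    """Runs whenever an 'order.created' event arrives; no server to manage."""
    order = json.loads(event["body"])
    total = sum(item["price"] * item["qty"] for item in order["items"])
    # In a microservice architecture, billing and inventory would be separate
    # functions, each triggered by its own events and scaled independently.
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order["id"], "total": total}),
    }
```

The appeal of this model is exactly what Vegesna describes: the operating system, database and scaling concerns live below the handler, leaving only the app logic.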

Catalyst also complies with SOC 2 Type II and ISO 27001 certification requirements, as well as GDPR. It offers developers the ability to access data from Zoho’s own applications, as well as third-party tools, all backed by Zoho’s Unified Data Model, a relational datastore for server-side and client deployment.

“The infrastructure that we built over the last several years is now being exposed,” said Vegesna. He also stressed that Zoho is launching the complete platform in one go (though it will obviously add to it over time). “We are bringing everything together so that you can develop a mobile or web app from a single interface,” he said. “We are not just throwing 50 different disparate services out there.” At the same time, though, the company is also opting for a very deliberate approach here with its focus on serverless. That, Vegesna believes, will allow Zoho Catalyst to compete with its larger competitors.

It’s also worth noting that Zoho knows it’s playing the long game here, something it is familiar with, given that it launched its first product, Zoho Writer, back in 2005, before Google had launched its productivity suite.



By Frederic Lardinois

Databricks brings its Delta Lake project to the Linux Foundation

Databricks, the big data analytics service founded by the original developers of Apache Spark, today announced that it is bringing its Delta Lake open-source project for building data lakes to the Linux Foundation and under an open governance model. The company announced the launch of Delta Lake earlier this year and even though it’s still a relatively new project, it has already been adopted by many organizations and has found backing from companies like Intel, Alibaba and Booz Allen Hamilton.

“In 2013, we had a small project where we added SQL to Spark at Databricks […] and donated it to the Apache Foundation,” Databricks CEO and co-founder Ali Ghodsi told me. “Over the years, slowly people have changed how they actually leverage Spark and only in the last year or so it really started to dawn upon us that there’s a new pattern that’s emerging and Spark is being used in a completely different way than maybe we had planned initially.”

This pattern, he said, is that companies are taking all of their data and putting it into data lakes and then doing a couple of things with this data, machine learning and data science being the obvious ones. But they are also doing things that are more traditionally associated with data warehouses, like business intelligence and reporting. The term Ghodsi uses for this kind of usage is ‘Lake House.’ More and more, Databricks is seeing that Spark is being used for this purpose and not just to replace Hadoop and do ETL (extract, transform, load). “This kind of Lake House pattern we’ve seen emerge more and more and we wanted to double down on it.”

Spark 3.0, which is launching today, enables more of these use cases and speeds them up significantly, in addition to the launch of a new feature that enables you to add a pluggable data catalog to Spark.

Delta Lake, Ghodsi said, is essentially the data layer of the Lake House pattern. It brings support for ACID transactions to data lakes, scalable metadata handling and data versioning, for example. All the data is stored in the Apache Parquet format and users can enforce schemas (and change them with relative ease if necessary).
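
In practice, Delta Lake’s API is a thin layer over Spark’s DataFrame reader and writer. A minimal PySpark sketch of the features Ghodsi describes — paths and data are illustrative, and it assumes the delta-core package is available to Spark:

```python
from pyspark.sql import SparkSession

# Assumes the Delta Lake package is on the Spark classpath
# (e.g. spark-submit --packages io.delta:delta-core_2.11:0.4.0).
spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# Writes are ACID transactions: concurrent readers never see a half-written table.
df.write.format("delta").mode("append").save("/tmp/events")

# Schema is enforced on write; appending mismatched columns raises an error
# unless the schema change is made explicitly.

# Data versioning ("time travel"): read the table as it was at version 0.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events")
v0.show()
```

Under the hood the files are plain Parquet, as the article notes; the transaction log is what adds the ACID and versioning guarantees on top.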

It’s interesting to see Databricks choose the Linux Foundation for this project, given that its roots are in the Apache Foundation. “We’re super excited to partner with them,” Ghodsi said about why the company chose the Linux Foundation. “They run the biggest projects on the planet, including the Linux project but also a lot of cloud projects. The cloud-native stuff is all in the Linux Foundation.”

“Bringing Delta Lake under the neutral home of the Linux Foundation will help the open source community dependent on the project develop the technology addressing how big data is stored and processed, both on-prem and in the cloud,” said Michael Dolan, VP of Strategic Programs at the Linux Foundation. “The Linux Foundation helps open source communities leverage an open governance model to enable broad industry contribution and consensus building, which will improve the state of the art for data storage and reliability.”


By Frederic Lardinois

Amazon migrates more than 100 consumer services from Oracle to AWS databases

AWS and Oracle love to take shots at each other, but as much as Amazon has knocked Oracle over the years, it was forced to admit that it was in fact a customer. Today, in a blog post, the company announced it was shedding Oracle for AWS databases, having effectively turned off its final Oracle database.

The move involved 75 petabytes of internal data stored in nearly 7,500 Oracle databases, according to the company. “I am happy to report that this database migration effort is now complete. Amazon’s Consumer business just turned off its final Oracle database (some third-party applications are tightly bound to Oracle and were not migrated),” AWS’s Jeff Barr wrote in the company blog post announcing the migration.

Over the last several years, the company has been working to move off of Oracle databases, but it’s not an easy task to move projects at Amazon’s scale. Barr wrote there were lots of reasons the company wanted to make the move. “Over the years we realized that we were spending too much time managing and scaling thousands of legacy Oracle databases. Instead of focusing on high-value differentiated work, our database administrators (DBAs) spent a lot of time simply keeping the lights on while transaction rates climbed and the overall amount of stored data mounted,” he wrote.

More than 100 consumer services have been moved to AWS databases, including customer-facing tools like Alexa, Amazon Prime and Twitch. Amazon also moved internal tools like its ad tech, fulfillment system, external payments and ordering systems. These are not minor matters; they are the heart and soul of Amazon’s operations.

Each team moved its Oracle database to an AWS database service such as Amazon DynamoDB, Amazon Aurora, Amazon Relational Database Service (RDS) or Amazon Redshift. Each group was allowed to choose the service it wanted, based on its individual needs and requirements.
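
Barr’s post doesn’t include code, but the flavor of the change is easy to sketch: queries that once went through SQL against Oracle become API calls against a managed service. A minimal illustration with boto3, AWS’s Python SDK — the table name and attributes here are hypothetical:

```python
import boto3

# Hypothetical "orders" table standing in for a migrated consumer service.
dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("orders")

# What was once an INSERT against Oracle becomes a PutItem call...
orders.put_item(Item={"order_id": "o-1001", "customer": "c-42", "status": "SHIPPED"})

# ...and a primary-key SELECT becomes GetItem, with scaling handled by AWS.
resp = orders.get_item(Key={"order_id": "o-1001"})
print(resp.get("Item"))
```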



By Ron Miller

Top VCs, founders share how to build a successful SaaS company

Last week at TechCrunch Disrupt in San Francisco, we hosted a panel on the Extra Crunch stage on “How to build a billion-dollar SaaS company.” A better title probably would have been “How to build a successful SaaS company.”

We spoke to Whitney Bouck, COO at HelloSign; Jyoti Bansal, CEO and founder at Harness; and Neeraj Agrawal, a partner at Battery Ventures, to get their views on how to move through the various stages of building a successful SaaS company.

While there is no magic formula, we covered a lot of ground, including finding product-market fit, generating early revenue, the importance of building a team, what to do when growth slows and, finally, how to resolve the tension between growth and profitability.

Finding product-market fit

Neeraj Agrawal: When we’re talking to the market, what we’re really looking for is a repeatable pattern of use cases. So when we’re talking to prospects, the words they use, the pain points they describe, are very similar from call to call to call. Once we see that pattern, we know we have product-market fit, and then we can replicate that.

Jyoti Bansal: Revenue is one measure of product-market fit. Are customers adopting it and getting value out of it and renewing? Until you start getting a first set of renewals and a first set of expansions and happy successful customers, you don’t really have product-market fit. So that’s the only way you can know if the product is really working or not.

Whitney Bouck: It isn’t just about revenue — the measures of success at all phases have to somewhat morph. You’ve got to be looking at usage, at adoption, value, renewals, expansion and, of course, the corollary, churn, to give you good health indicators about how you’re doing with product-market fit.

Generating early revenue

Jyoti Bansal: As founders we’ve realized, getting from idea to early revenue is one of the hardest things to do. The first million in revenue is all about street fighting. Founders have to go out there and win business and do whatever it takes to get to revenue.

As your revenue grows, what you focus on as a company changes. Zero to $1 million, your goal is to find the product-market fit, do whatever it takes to get early customers. One million to $10 million, you start scaling it. Ten million to $75 million is all about sales, execution, and [at] $75 million plus, the story changes to how do you go into new markets and things like that.

Whitney Bouck: You really do have to get that pull from the market to be able to really start the momentum and growth. The freemium model is one of the ways that we start to engage people — getting visibility into the product, getting exposure to the product, really getting people thinking about, and frankly, spreading the word about how this product can provide value.

Photo: Kimberly White/Getty Images for TechCrunch


By Ron Miller

Clari snags $60M Series D on valuation of around $500M

Clari uses AI to help companies find key information like the customers most likely to convert, the state of orders in the sales process or the next big sources of revenue. As its revenue management system continues to flourish, the company announced a $60 million Series D investment today.

Sapphire Ventures led the round with help from newcomer Madrona Venture Group and existing investors Sequoia Capital, Bain Capital Ventures and Tenaya Capital. Today’s investment brings the total raised to $135 million, according to the company.

The valuation, which CEO and co-founder Andy Byrne pegged at around half a billion dollars, appears to be a hefty jump from what the company was likely valued at in 2018 after its $35 million Series C. As TechCrunch’s Ingrid Lunden wrote at the time:

“For some context, Clari, according to Pitchbook, had a relatively modest post-money valuation of $83.5 million in its last round in 2014, so my guess is that it’s now comfortably into hundred-million territory, once you add in this latest $35 million,” Lunden wrote.

Byrne says the company wasn’t even really looking for a new round, but when investors came knocking, he couldn’t refuse. “On the fundraise side, what’s really interesting is how this whole thing went down. We weren’t out looking, but we had a massive amount of interest from a lot of firms. We decided to engage, and we got it done in less than three weeks, which the board was kind of blown away by,” Byrne told TechCrunch.

What’s motivating these companies to invest is that Clari is helping to define the revenue operations category, and it has attracted companies like Okta, Zoom and Qualtrics as customers. What it provides is an AI-fueled way to see where the best sales opportunities are to drive revenue, which is what every company is looking for. At the same time, Byrne says he’s moving companies away from spreadsheet-driven record keeping and enabling them to see all of the data in one place.

“Clari is allowing a rep to really understand where they should spend time, automating a lot of things for them to close deals faster, while giving managers new insights they’ve never had before to allow them to drive more revenue. And then we’re getting them out of ‘Excel hell.’ They’re no longer in these spreadsheets. They’re in Clari, and have more predictability in their forecasting,” he said.

Clari was founded in 2012 and is headquartered in Sunnyvale, California. It has over 300 customers and just passed the 200-employee mark, a number that should increase as the company uses this money to accelerate growth and expand the product’s capabilities.


By Ron Miller

Salesforce adds integrated order management system to its arsenal

Salesforce certainly has a lot of tools crossing the sales, service and marketing categories, but until today, when it announced Lightning Order Management, it lacked an integration layer that allowed companies to work across these systems to manage orders in a seamless way.

“This is a new product built from the ground up on the Salesforce Lightning Platform to allow our customers to fulfill, manage and service their orders at scale,” Luke Ball, VP of product management at Salesforce, told TechCrunch.

He says that order management is an often-overlooked part of the sales process, but it’s one that’s really key to the whole experience you’re trying to provide for your customers. “We think about advertising and acquisition and awareness. We think about creating amazing, compelling commerce experiences on the storefront or on your website or in your app. But I think a lot of brands don’t necessarily think about the delivery experience as part of that customer experience,” he said.

The problem is that order management involves so many different systems along with internal and external stakeholders. Trying to pull them together into a coherent system is harder than it looks, especially when it could also involve older legacy technology. As Ball pointed out, the process includes shipping carriers, warehouse management systems, ERP systems and payment and tax and fraud tools.

The Salesforce solution involves a few key pieces. For starters, there is order lifecycle management, what Ball calls the brains of the operation. “This is the core logic of an order management system. Everything that extends commerce beyond the Buy button — supply chain management, order fulfillment, payment capture, invoice creation, inventory availability and custom business logic. This is the bread and butter of an order management system,” he said.
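
Salesforce didn’t publish the underlying model, but the “core logic” Ball describes amounts to a state machine over an order’s lifecycle. A generic sketch — the states and transitions are illustrative, not Salesforce’s implementation:

```python
# Illustrative order-lifecycle state machine; the states and transitions are
# generic, not Salesforce's actual Lightning Order Management model.
VALID_TRANSITIONS = {
    "PLACED": {"PAYMENT_CAPTURED", "CANCELLED"},
    "PAYMENT_CAPTURED": {"FULFILLED", "CANCELLED"},
    "FULFILLED": {"INVOICED", "RETURN_REQUESTED"},
    "INVOICED": {"RETURN_REQUESTED", "CLOSED"},
    "RETURN_REQUESTED": {"REFUNDED"},
}

class Order:
    def __init__(self, order_id: str):
        self.order_id = order_id
        self.state = "PLACED"

    def transition(self, new_state: str) -> None:
        # Reject transitions the lifecycle doesn't allow (e.g. refunding
        # an order that was never fulfilled).
        if new_state not in VALID_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"{self.order_id}: cannot go {self.state} -> {new_state}")
        self.state = new_state
```

The hard part, as the article notes, isn’t the state machine itself but wiring each transition to external systems: payment capture, warehouses, carriers, tax and fraud tools.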

Salesforce Lightning Order Management App Picker. Image: Salesforce

Customers start by building visual order workflows. They can move between systems in an App Picker, and information is shared between Commerce Cloud and Service Cloud, so that as customers move from sales to service, the information moves with them, making it easier to process customer inquiries about an order, including returns.

Ball says that Salesforce recognizes that not every customer will be an all-Salesforce shop, and the system is designed to work with tools from other vendors, although those external tools won’t show up in the App Picker. Salesforce also knows that this process involves external vendors like shipping companies, so it will offer specific integration apps for Lightning Order Management in the Salesforce AppExchange.

The company is announcing the product today and will be making it generally available in February.


By Ron Miller

Okta wants to make every user a security ally

End users tend to get a bad rap in the security business because they are often the weakest security link. They fall for phishing schemes, use weak passwords and often unknowingly are the conduit for malicious actors getting into your company’s systems. Okta wants to change that by giving end users information about suspicious activity involving their login, while letting them share information with the company’s security apparatus when it makes sense.

Okta actually developed a couple of new products under the SecurityInsights umbrella. The end-user product is called UserInsights. The other new product, called HealthInsights, is designed for administrators and makes suggestions on how to improve the overall identity posture of a company.

UserInsights lets users know when there is suspicious activity associated with their accounts, such as a login from an unrecognized device. If it appears to involve a stolen password, the user can click the Report button to report the incident to the company’s security apparatus, where it triggers an automated workflow to start an investigation. The user should also, of course, change the compromised password.
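
Okta hasn’t documented what the Report button sends, but the flow it describes — a user-initiated signal that kicks off an automated investigation — might look something like this sketch, in which the endpoint, payload shape and helper functions are all hypothetical:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical receiver for user-reported suspicious activity; the route,
# payload shape and downstream steps are illustrative, not Okta's API.
@app.route("/security/report", methods=["POST"])
def report_suspicious_login():
    report = request.get_json()  # e.g. {"user_id": "u1", "reason": "unrecognized device"}
    open_investigation(report)          # hypothetical: files a case for the SOC
    expire_password(report["user_id"])  # hypothetical: forces a credential reset
    return jsonify({"status": "received"}), 202

def open_investigation(report: dict) -> None:
    print(f"Investigation opened for {report['user_id']}: {report['reason']}")

def expire_password(user_id: str) -> None:
    print(f"Password reset required for {user_id}")
```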

HealthInsights operates in a similar fashion except for administrators at the system level. It checks the configuration parameters and makes sure the administrator has set up Okta according to industry best practices. When there is a gap between the company’s settings and a best practice, the system alerts the administrator and allows them to fix the problem. This could involve implementing a stricter password policy, creating a block list for known rogue IP addresses or forcing users to use a second factor for certain sensitive operations.

Health Insights Report. Image: Okta

Okta is first and foremost an identity company. Organizations, large and small, can tap into Okta for a single sign-on interface from which users can access all of their cloud applications in one place. “If you’re a CIO and you have a bunch of SaaS applications, you have a [bunch of] identity systems to deal with. With Okta, you narrow it down to one system,” CEO Todd McKinnon told TechCrunch.

That means, if your system does get spoofed, you can detect anomalous behavior much more easily because you’re dealing with one logon instead of many. The company developed these new products to take advantage of that, and provide these groups of employees with the information they need to help protect the company’s systems.

The SecurityInsights tools are available starting today.


By Ron Miller

Suse’s OpenStack Cloud dissipates

Suse, the newly independent open-source company behind the eponymous Linux distribution and an increasingly large set of managed enterprise services, today announced a bit of a new strategy as it looks to stay on top of the changing trends in the enterprise developer space. Over the course of the last few years, Suse put a strong emphasis on the OpenStack platform, an open-source project that essentially allows big enterprises to build something in their own data centers akin to the core services of a public cloud like AWS or Azure. With this new strategy, Suse is transitioning away from OpenStack. It’s ceasing both production of new versions of its OpenStack Cloud and sales of its existing OpenStack product.

“As Suse embarks on the next stage of our growth and evolution as the world’s largest independent open source company, we will grow the business by aligning our strategy to meet the current and future needs of our enterprise customers as they move to increasingly dynamic hybrid and multi-cloud application landscapes and DevOps processes,” the company said in a statement. “We are ideally positioned to execute on this strategy and help our customers embrace the full spectrum of computing environments, from edge to core to cloud.”

What Suse will focus on going forward are its Cloud Application Platform (which is based on the open-source Cloud Foundry platform) and its Kubernetes-based container platform.

Chances are, Suse wouldn’t shut down its OpenStack services if it saw growing sales in this segment. But while the hype around OpenStack died down in recent years, it’s still among the world’s most active open-source projects and runs the production environments of some of the world’s largest companies, including some very large telcos. It took the project a while to find its footing in a market where, for the last few years, all of the mindshare has gone to containers — and especially Kubernetes. At the same time, though, containers are also opening up new opportunities for OpenStack, as you still need some way to manage those containers and the rest of your infrastructure.

The OpenStack Foundation, the umbrella organization that helps guide the project, remains upbeat.

“The market for OpenStack distributions is settling on a core group of highly supported, well-adopted players, just as has happened with Linux and other large-scale, open-source projects,” said OpenStack Foundation COO Mark Collier in a statement. “All companies adjust strategic priorities from time to time, and for those distro providers that continue to focus on providing open-source infrastructure products for containers, VMs and bare metal in private cloud, OpenStack is the market’s leading choice.”

He also notes that analyst firm 451 Research believes there is a combined Kubernetes and OpenStack market of about $11 billion, with $7.7 billion of that focused on OpenStack. “As the overall open-source cloud market continues its march toward eight figures in revenue and beyond — most of it concentrated in OpenStack products and services — it’s clear that the natural consolidation of distros is having no impact on adoption,” Collier argues.

For Suse, though, this marks the end of its OpenStack products. For now, the company remains a top-level Platinum sponsor of the OpenStack Foundation, and Suse’s Alan Clark remains on the Foundation’s board. Suse is involved in some of the other projects under the OpenStack brand, so the company will likely remain a sponsor, but it’s probably a fair guess that it won’t continue to do so at the highest level.


By Frederic Lardinois

Nadella warns government conference not to betray user trust

Microsoft CEO Satya Nadella, delivering the keynote at the Microsoft Government Leaders Summit in Washington, DC today, had a message for attendees: maintain user trust in their tools and technologies above all else.

He said it is essential to earn user trust, regardless of your business. “Now, of course, the power law here is all around trust because one of the keys for us, as providers of platforms and tools, trust is everything,” he said today. But he says it doesn’t stop with the platform providers like Microsoft. Institutions using those tools also have to keep trust top of mind or risk alienating their users.

“That means you need to also ensure that there is trust in the technology that you adopt, and the technology that you create, and that’s what’s going to really define the power law on this equation. If you have trust, you will have exponential benefit. If you erode trust it will exponentially decay,” he said.

He says Microsoft sees trust along three dimensions: privacy, security and ethical use of artificial intelligence. All of these come together in his view to build a basis of trust with your customers.

Nadella said he sees privacy as a human right, pure and simple, and it’s up to vendors to ensure that privacy or lose the trust of their customers. “The investments around data governance is what’s going to define whether you’re serious about privacy or not,” he said. For Microsoft, they look at how transparent they are about how they use the data, their terms of service, and how they use technology to ensure that’s being carried out at runtime.

He reiterated the call he made last year for a federal privacy law. With GDPR in Europe and California’s CCPA coming online in January, he sees a centralized federal law as a way to streamline regulations for business.

As for security, as you might expect, he defined it in terms of how Microsoft was implementing it, but the message was clear that you needed security as part of your approach to trust, regardless of how you implement that. He asked several key questions of attendees.

“Cyber is the second area where we not only have to do our work, but you have to [ask], what’s your operational security posture, how have you thought about having the best security technology deployed across the entire chain, whether it’s on the application side, the infrastructure side or on the endpoint side, and most importantly, around identity,” Nadella said.

The final piece, one which he said was just coming into play was how you use artificial intelligence ethically, a sensitive topic for a government audience, but one he wasn’t afraid to broach. “One of the things people say is, ‘Oh, this AI thing is so unexplainable, especially deep learning.’ But guess what, you created that deep learning [model]. In fact, the data on top of which you train the model, the parameters and the number of parameters you use — a lot of things are in your control. So we should not abdicate our responsibility when creating AI,” he said.

Whether Microsoft or the US government can adhere to these lofty goals is unclear, but Nadella was careful to outline them both for his company’s benefit and this particular audience. It’s up to both of them to follow through.


By Ron Miller

Satya Nadella looks to the future with edge computing

Speaking today at the Microsoft Government Leaders Summit in Washington, DC, Microsoft CEO Satya Nadella made the case for edge computing, even while pushing the Azure cloud as what he called “the world’s computer.”

While Amazon, Google and other competitors may have something to say about that, marketing hype aside, many companies are still in the midst of transitioning to the cloud. Nadella says the future of computing could actually be at the edge where computing is done locally before data is then transferred to the cloud for AI and machine learning purposes. What goes around, comes around.

But as Nadella sees it, this is not going to be about either edge or cloud. It’s going to be the two technologies working in tandem. “Now, all this is being driven by this new tech paradigm that we describe as the intelligent cloud and the intelligent edge,” he said today.

He said that to truly understand the impact the edge is going to have on computing, you have to look at research, which predicts there will be 50 billion connected devices in the world by 2030, a number even he finds astonishing. “I mean this is pretty stunning. We think about a billion Windows machines or a couple of billion smartphones. This is 50 billion [devices], and that’s the scope,” he said.

The key here is that these 50 billion devices, whether you call them edge devices or the Internet of Things, will be generating tons of data. That means you will have to develop entirely new ways of thinking about how all this flows together. “The capacity at the edge, that ubiquity is going to be transformative in how we think about computation in any business process of ours,” he said. As we generate ever-increasing amounts of data, whether we are talking about public sector use cases or any business need, it’s going to be the fuel for artificial intelligence, and he sees the sheer amount of that data driving new AI use cases.

“Of course when you have that rich computational fabric, one of the things that you can do is create this new asset, which is data and AI. There is not going to be a single application, a single experience that you are going to build, that is not going to be driven by AI, and that means you have to really have the ability to reason over large amounts of data to create that AI,” he said.

Nadella would be more than happy to have his audience take care of all that using Microsoft products, whether Azure compute, database, AI tools or edge computers like the Data Box Edge it introduced in 2018. While Nadella is probably right about the future of computing, all of this could apply to any cloud, not just Microsoft.

As computing shifts to the edge, it’s going to have a profound impact on the way we think about technology in general, but it’s probably not going to involve being tied to a single vendor, regardless of how comprehensive their offerings may be.


By Ron Miller

Harness launches Continuous Insights to measure software team performance

Jyoti Bansal, CEO and co-founder at Harness, has always been frustrated by the lack of tools to measure software development team performance. Harness is a tool that provides Continuous Delivery as a Service, and its latest offering, Continuous Insights, lets managers know exactly how their teams are performing.

Bansal cites the traditional management maxim that if you can’t measure a process, you can’t fix it, and Continuous Insights is designed to provide a way to measure engineering effectiveness. “People want to understand how good their software delivery processes are, and where they are tracking right now, and that’s what this product, Continuous Insights, is about,” Bansal explained.

He says that it is the first product on the market to provide this view of performance without pulling weeks or months of data. “How do you get data around what your current performance is like, and how fast you deliver software, or where the bottlenecks are — that’s where there are currently a lot of visibility gaps,” he said. He adds, “Continuous Insights makes it extremely easy for engineering teams to clearly measure and track software delivery performance with customizable dashboards.”

Harness measures four key metrics, as defined by DevOps Research and Assessment (DORA) in its book Accelerate: deployment frequency, lead time, mean time to recovery and change failure rate. “Any organization that can do a better job with these would really out-innovate their peers and competitors,” he said. Conversely, companies doing badly on these four metrics are more likely to fall behind in the market.
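
As a concrete illustration of those metrics, here is how three of the four fall out of a plain deployment log; the field names are assumed for the sketch, not Harness’s data model, and lead time would additionally need commit timestamps:

```python
from datetime import datetime

# Assumed deployment-log shape for illustration; not Harness's actual schema.
deployments = [
    {"at": datetime(2019, 10, 1), "failed": False, "recovery_minutes": None},
    {"at": datetime(2019, 10, 3), "failed": True,  "recovery_minutes": 45},
    {"at": datetime(2019, 10, 7), "failed": False, "recovery_minutes": None},
]

window_days = (deployments[-1]["at"] - deployments[0]["at"]).days or 1
deployment_frequency = len(deployments) / window_days      # deploys per day

failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)     # share of deploys that fail

mean_time_to_recovery = (
    sum(d["recovery_minutes"] for d in failures) / len(failures)  # minutes
)

print(deployment_frequency, change_failure_rate, mean_time_to_recovery)
```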

Image: Harness

By measuring these four areas, Continuous Insights not only provides a way to track performance; Bansal also sees it as a way to gamify the metrics, with each team trying to outdo the others on efficiency. While you would think that engineering would be the most data-driven organization, he says that up until now it has lacked the tooling. He hopes that Harness users will be able to bring that kind of rigor to engineering.


By Ron Miller

Render challenges the cloud’s biggest vendors with cheaper, managed infrastructure

Render, a participant in the TechCrunch Disrupt SF Startup Battlefield, has a big idea. It wants to take on the world’s biggest cloud vendors by offering developers a cheaper alternative that also removes a lot of the complexity around managing cloud infrastructure.

Render’s goal is to help developers, especially those at smaller companies without large DevOps teams, still take advantage of modern development approaches in the cloud. “We are focused on being the easiest and most flexible provider for teams to run any application in the cloud,” CEO and founder Anurag Goel explained.

He says that one of the biggest pain points for developers and startups, even fairly large startups, is that they have to build up a lot of DevOps expertise when they run applications in the cloud. “That means they are going to hire extremely expensive DevOps engineers or consultants to build out the infrastructure on AWS,” he said. Even after they set up the cloud infrastructure and move applications there, he points out, there is ongoing maintenance around patching, security and identity and access management. “Render abstracts all of that away, and automates all of it,” Goel said.

It’s not easy competing with the big players on scale, but he says the company has been doing pretty well so far, and it plans to move much of its operations to bare-metal servers, which he believes will help stabilize costs further.

“Longer term, we have a lot of ideas [about how to reduce our costs], and the simplest thing we can do is to switch to bare metal to reduce our costs pretty much instantly.” He says the way they have built Render will make that easier to do. The plan now is to start moving their services to bare metal in the fourth quarter of this year.

Even though the company only launched in April, it is already seeing great traction. “The response has been great. We’re now doing over 100 million HTTP requests every week. And we have thousands of developers and startups and everyone from people doing small hobby projects to even a major presidential campaign,” he said.

Although he couldn’t share the candidate’s name, he said the campaign was using Render for everything, including the infrastructure for hosting its website and its back-end administration. “Basically all of their cloud infrastructure is on Render,” he said.

Render has raised a $2.2 million seed round and is continuing to add services to the product, including several new ones it will announce this week around storage, infrastructure as code and one-click deployments.


By Ron Miller