RealityEngines.AI raises $5.25M seed round to make ML easier for enterprises

RealityEngines.AI, a research startup that wants to help enterprises make better use of AI, even when they only have incomplete data, today announced that it has raised a $5.25 million seed funding round. The round was led by former Google CEO and Chairman Eric Schmidt and Google founding board member Ram Shriram. Khosla Ventures, Paul Buchheit, Deepchand Nishar, Elad Gil, Keval Desai, Don Burnette and others also participated in this round.

The fact that the company was able to raise funding from this rather prominent group of investors clearly shows that its overall thesis resonates. The company, which doesn’t have a product yet, tells me that it specifically wants to help enterprises make better use of the smaller and noisier datasets they have and provide them with state-of-the-art machine learning and AI systems that they can quickly take into production. It also aims to provide its customers with systems that can explain their predictions and are free of various forms of bias, something that’s hard to do when the system is essentially a black box.

As RealityEngines CEO Bindu Reddy, who was previously the head of products for Google Apps, told me, the company plans to use the funding to build out its research and development team. The company, after all, is tackling some of the most fundamental and hardest problems in machine learning right now — and that costs money. Some of these problems, like working with smaller datasets, already have partial solutions, such as generative adversarial networks that can augment existing datasets, and RealityEngines expects to innovate on top of them.

Reddy is also betting on reinforcement learning as one of the core machine learning techniques for the platform.

Once it has its product in place, the plan is to make it available as a pay-as-you-go managed service that will make machine learning more accessible not just to large enterprises, but also to small and medium businesses, which increasingly need access to these tools to remain competitive.


By Frederic Lardinois

Apollo raises $22M for its GraphQL platform

Apollo, a San Francisco-based startup that provides a number of developer and operator tools and services around the GraphQL query language, today announced that it has raised a $22 million growth funding round co-led by Andreessen Horowitz and Matrix Partners. Existing investors Trinity Ventures and Webb Investment Network also participated in this round.

Today, Apollo is probably the biggest player in the GraphQL ecosystem. At its core, the company’s services allow businesses to use the Facebook-incubated GraphQL technology to shield their developers from the patchwork of legacy APIs and databases as they look to modernize their technology stacks. The team argues that while REST APIs that talked directly to other services and databases still made sense a few years ago, that approach no longer does now that the number of API endpoints keeps increasing rapidly.

Apollo replaces this with what it calls the Data Graph. “There is basically a missing piece where we think about how people build apps today, which is the piece that connects the billions of devices out there,” Apollo co-founder and CEO Geoff Schmidt told me. “You probably don’t just have one app anymore, you probably have three, for the web, iOS and Android. Or maybe six. And if you’re a two-sided marketplace you’ve got one for buyers, one for sellers and another for your ops team.” Managing the interfaces between all of these apps quickly becomes complicated and means you have to write a lot of custom code for every new feature. The promise of the Data Graph is that developers can use GraphQL to query the data in the graph and move on, all without having to write the boilerplate code that typically slows them down. At the same time, the ops teams can use the Graph to enforce access policies and implement other security features.
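Conceptually, the Data Graph lets a client answer one declarative query that spans several backend services, instead of wiring each app to each API by hand. Here is a minimal, self-contained Python sketch of that resolver pattern; the service data, field names and `execute` helper are all hypothetical and stand in for Apollo’s actual GraphQL machinery, which this does not reproduce:

```python
# Toy "data graph": each top-level field resolves against a different backend
# service, so the client composes one query rather than calling several APIs.

# Hypothetical in-memory lookups standing in for real services/databases.
USERS = {"u1": {"name": "Ada"}}
ORDERS = {"u1": [{"id": "o1", "total": 42.0}]}

# One resolver per field, each hiding a different legacy API or database.
RESOLVERS = {
    "user": lambda args: USERS[args["id"]],
    "orders": lambda args: ORDERS[args["id"]],
}

def execute(query: dict) -> dict:
    """Resolve each requested field via its backend resolver."""
    return {field: RESOLVERS[field](args) for field, args in query.items()}

# A single "query" spanning two services, analogous to one GraphQL request.
result = execute({"user": {"id": "u1"}, "orders": {"id": "u1"}})
```

The point of the pattern is that adding a new app or a new feature means writing a new query, not new per-app glue code for every backend.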

“If you think about it, there’s a lot of analogies to what happened with relational databases in the 80s,” Schmidt said. “There is a need for a new layer in the stack. Previously, your query planner was a human being, not a piece of software, and a relational database is a piece of software that would just give you a database. And you needed a way to query that database, and that syntax was called SQL.”

Geoff Schmidt, Apollo CEO, and Matt DeBergalis, CTO.

GraphQL itself, of course, is open source. Apollo is now building a lot of the proprietary tools around this idea of the Data Graph that make it useful for businesses. There’s a cloud-hosted graph manager, for example, that lets you track your schema, a dashboard to track performance and integrations with continuous integration services. “It’s basically a set of services that keep track of the metadata about your graph and help you manage the configuration of your graph and all the workflows and processes around it,” Schmidt said.

The development of Apollo didn’t come out of nowhere. The founders previously launched Meteor, a framework and set of hosted services that allowed developers to write their apps in JavaScript, both on the front-end and back-end. Meteor was tightly coupled to MongoDB, though, which worked well for some use cases but also held the platform back in the long run. With Apollo, the team decided to go in the opposite direction and instead build a platform that makes being database agnostic the core of its value proposition.

The company also recently launched Apollo Federation, which makes it easier for businesses to work with a distributed graph. Sometimes, after all, your data lives in lots of different places. Federation allows for a distributed architecture that combines all of the different data sources into a single schema that developers can then query.
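In spirit, federation composes several independently owned subgraph schemas into one schema the gateway can serve. A simplified sketch of that composition step follows; the type and field names are invented, and real Apollo Federation works through SDL directives such as `@key` rather than Python dictionaries:

```python
# Each team owns a "subgraph" describing the types and fields it can serve.
accounts_subgraph = {"User": {"id", "name"}}
reviews_subgraph = {"User": {"id", "reviews"}, "Review": {"body"}}

def compose(*subgraphs):
    """Merge subgraph type definitions into a single federated schema."""
    schema = {}
    for sub in subgraphs:
        for type_name, fields in sub.items():
            # The same type may be extended by several services; union fields.
            schema.setdefault(type_name, set()).update(fields)
    return schema

federated = compose(accounts_subgraph, reviews_subgraph)
# The gateway now exposes a single User type with fields from both services.
```

Developers query the composed schema as if it were one graph, while each field is still resolved by the service that owns it.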

Schmidt tells me that the company started to get some serious traction last year and by December, it was getting calls from VCs who had heard from their portfolio companies that they were using Apollo.

The company plans to use the new funding to build out its technology and scale its field team to support the enterprises that bet on its technology, including the open-source projects that power the service.

“I see the Data Graph as a core new layer of the stack, just like we as an industry invested in the relational database for decades, making it better and better,” Schmidt said. “We’re still finding new uses for SQL and that relational database model. I think the Data Graph is going to be the same way.”


By Frederic Lardinois

WhatsApp is finally going after outside firms that are abusing its platform

WhatsApp has so far relied on its past dealings with bad players within its platform to ramp up its efforts to curtail spam and other automated behavior. The Facebook-owned giant has now announced an additional step it plans to take beginning later this year to improve the health of its messaging service: going after those whose mischievous activities can’t be traced within its platform.

The messaging platform, used by more than 1.5 billion people, confirmed on Tuesday that starting December 7 it will begin considering signals off its platform to pursue legal action against those who are abusing its system. The company will also go after individuals who — or firms that — falsely claim to have found ways to cause havoc on the service.

The move comes as WhatsApp grapples with challenges such as spam campaigns that push agendas and the spread of false information on its messaging service in some markets. “This serves as notice that we will take legal action against companies for which we only have off-platform evidence of abuse if that abuse continues beyond December 7, 2019, or if those companies are linked to on-platform evidence of abuse before that date,” it said in an FAQ post on its site.

A WhatsApp spokesperson confirmed the change to TechCrunch, adding, “WhatsApp was designed for private messaging, so we’ve taken action globally to prevent bulk messaging and enforce limits on how WhatsApp accounts that misuse WhatsApp can be used. We’ve also stepped up our ability to identify abuse, which helps us ban 2 million accounts globally per month.”

Earlier this year, WhatsApp said (PDF) it had built a machine learning system to detect and weed out users who engage in inappropriate behavior, such as sending bulk messages or creating multiple accounts with the intention of harming the service. The platform said that by drawing on its past dealings with problematic behavior, it was able to ban 20% of bad accounts at the time of registration itself.

But the platform is still struggling to contain abusive behavior, a Reuters report claimed last month. The news agency reported on tools that were readily being sold in India for under $15 that claimed to bypass some of the restrictions WhatsApp introduced in recent months.

TechCrunch understands that with today’s changes, WhatsApp is going after that same set of bad players. In recent months it has already started sending cease-and-desist letters to marketing companies that claim to be able to abuse WhatsApp, a person familiar with the matter said.


By Manish Singh

GitHub hires former Bitnami co-founder Erica Brescia as COO

It’s been just over a year since Microsoft bought GitHub for $7.5 billion, and the company has grown in that time. Today it announced that it has hired former Bitnami COO and co-founder Erica Brescia to be its COO.

Brescia handled COO duties at Bitnami from its founding in 2011 until it was sold to VMware last month. In a case of good timing, GitHub was looking to fill its COO role and after speaking to CEO Nat Friedman, she believed it was going to be a good fit. The GitHub mission to provide a place for developers to contribute to various projects fits in well with what she was doing at Bitnami, which provided a way to deliver software to developers in the form of packages such as containers or Kubernetes Helm charts.

New GitHub COO Erica Brescia

She sees that experience of building a company, of digging in and taking on whatever roles the situation required, translating well as she takes over as COO at a company that is growing as quickly as GitHub. “I was really shocked to see how quickly GitHub is still growing, and I think bringing that kind of founder mentality, understanding where the challenges are and working with a team to come up with solutions, is something that’s going to translate really well and help the company to successfully scale,” Brescia told TechCrunch.

She admits that it’s going to be a different kind of challenge working with a company she didn’t help build, but she sees a lot of similarities that will help her as she moves into this new position. Having just sold a company, she obviously didn’t have to take a job right away, but this one was too compelling to leave on the table.

“I think there were a number of different directions that I could have gone coming out of Bitnami, and GitHub was really exciting to me because of the scale of the opportunity and the fact that it’s so focused on developers and helping developers around the world, both open source and enterprise, collaborate on the software that really powers the world moving forward,” she said.

She says that as COO at a growing company, it will fall on her to find more efficient ways to run things as the company continues to scale. “When you have a company that’s growing that quickly, there are inevitably things that probably could be done more efficiently at that scale, and so one of the first things that I plan on spending time on is just understanding from the team where the pain points are, and what can we do to help the organization run like a more well-oiled machine.”


By Ron Miller

Google Cloud gets capacity reservations, extends committed use discounts beyond CPUs

Google Cloud made two significant pricing announcements today. Those, you’ll surely be sad to hear, don’t involve the usual price drops for compute and storage. Instead, Google Cloud today announced that it is extending its committed use discounts, which give you a significant discount when you commit to using a certain number of resources for one or three years, to GPUs, Cloud TPU Pods and local SSDs. In return for locking yourself into a long-term plan, you can get discounts of 55 percent off on-demand prices.

In addition, Google is launching a capacity reservation system for Compute Engine that allows users to reserve resources in a specific zone for later use, ensuring they have guaranteed access to those resources when needed.

At first glance, capacity reservations may seem like a weird concept in the cloud. The promise of cloud computing, after all, is that you can just spin machines up and down at will — and never really have to think about availability.

So why launch a reservation system? “This is ideal for use cases like disaster recovery or peace of mind, so a customer knows that they have some extra resources, but also for retail events like Black Friday or Cyber Monday,” Google senior product manager Manish Dalwadi told me.

These users want absolute certainty that the resources will be available to them when they need them. And while many of us think of the large clouds as having a virtually infinite number of virtual machines available at any time, some machine types may occasionally only be available in an availability zone that is not the one where the rest of your compute resources are running.

Users can create or delete reservations at any time and any existing discounts — including sustained use discounts and committed use discounts — will be applied automatically.

As for committed use discounts, it’s worth noting that Google has always taken a pretty flexible approach to them. Users don’t have to commit to using a specific machine type for three years, for example. Instead, they commit to using a specific number of CPU cores and a certain amount of memory.
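To make the pricing mechanics concrete, here is a back-of-the-envelope sketch of how a committed use discount changes an hourly bill. The on-demand rate below is a made-up number for illustration; only the 55 percent figure comes from the announcement above:

```python
# Hypothetical on-demand hourly rate for some GPU resource (illustrative only).
on_demand_per_hour = 2.48   # USD
committed_discount = 0.55   # the 55% discount for a long-term commitment

# The committed rate is simply the on-demand rate with the discount applied.
committed_per_hour = on_demand_per_hour * (1 - committed_discount)

# Rough monthly savings if the resource runs around the clock.
hours_per_month = 730
monthly_savings = (on_demand_per_hour - committed_per_hour) * hours_per_month
```

The trade, as the article notes, is flexibility: the discount applies to a committed quantity of cores, memory or accelerators, not to a specific machine you can freely abandon.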

“What we heard from customers was that other commit models are just too inflexible and their utilization rates were very low, like 70, 60 percent utilization,” Google product director Paul Nash told me. “So one of our design goals with committed use discounts was to figure out how we could provide something that gives us the capacity planning signal that we need, provides the same amount of discounts that we want to pass on to customers, but do it in a way that customers actually feel like they are getting a great deal and so that they don’t have to hyper-manage these things in order to get the most out of them.”

Both the extended committed use discounts and the new capacity reservation system for Compute Engine resources are now live in the Google Cloud.


By Frederic Lardinois

Aion Network introduces first blockchain virtual machine for Java developers

Aion Network, a non-profit dedicated to creating tools to promote blockchain technologies, announced a new virtual machine today that’s built on top of the popular Java Virtual Machine. Its ultimate goal is increasing the popularity of blockchain with developers.

Aion CEO Matthew Spoke says one of the barriers to more widespread blockchain adoption has been a lack of tooling for developers in a common language like Java. The company believed that if it could build a virtual machine specifically for blockchain on top of the Java Virtual Machine (JVM), which has been in use for years, it could help promote more extensive use of blockchain.

Today, it’s announcing the Aion Virtual Machine (AVM), a virtual machine that sits on top of the JVM. The AVM makes it possible for developers to keep using their familiar toolset while building blockchain components, like smart contracts, in the AVM without having to alter the JVM at all.

“We didn’t want to modify the JVM. We wanted to build some sort of supplementary software layer that can interact with the JVM. Blockchains have a set of unique criteria. They need to be deterministic; the computing needs to happen across the distributed network of nodes; and the JVM was never designed with this in mind,” Spoke explained.

Aion set out to build a virtual machine for blockchain without reinventing the wheel. It recognized that Java remains one of the most popular programming languages around, and it didn’t want to mess with that. In fact, it wanted to take advantage of the popularity by building a kind of blockchain interpreter that would sit on top of the JVM without getting in the way of it.

“Rather than trying to convince people of the merits of a new system, can we just get the system they’re already familiar with on top of the blockchain? So we started engineering towards that solution. And we’ve been working on that for about a year at this point, leading up to our release this week to prove that we can solve that problem,” Spoke told TechCrunch.

Up to this point, Aion has been focusing on the crypto community, but the company felt that to really push blockchain beyond the realm of the true believers, it needed to come up with a way for developers who weren’t immersed in it to take advantage of it.

“Our big focus now is how do we take this message of building blockchain apps and take it into a more traditional software industry audience. Instead of trying to compete for the attention of crypto developers, we want the blockchain to become almost a micro service layer to what normal software developers are solving on a day-to-day basis,” he said.

The company is hoping that by providing this way to access blockchain services, it can help popularize blockchain concepts with developers who might not otherwise be familiar with them. It’s but one attempt to bring blockchain to more business-oriented use cases, but the company has given this a lot of thought and believes the approach will help it evangelize blockchain to a wider audience of developers moving forward.


By Ron Miller

How Kubernetes came to rule the world

Open source has become the de facto standard for building the software that underpins the complex infrastructure that runs everything from your favorite mobile apps to your company’s barely usable expense tool. Over the course of the last few years, a lot of new software has been deployed on top of Kubernetes, the tool for managing large server clusters running containers that Google open sourced five years ago.

Today, Kubernetes is the fastest-growing open-source project, and earlier this month the biannual KubeCon+CloudNativeCon conference attracted almost 8,000 developers to sunny Barcelona, Spain, making the event the largest open-source conference in Europe yet.

To talk about how Kubernetes came to be, I sat down with Craig McLuckie, one of the co-founders of Kubernetes at Google (who then went on to his own startup, Heptio, which he sold to VMware); Tim Hockin, another Googler who was an early member on the project and was also on Google’s Borg team; and Gabe Monroy, who co-founded Deis, one of the first successful Kubernetes startups, and then sold it to Microsoft, where he is now the lead PM for Azure Container Compute (and often the public face of Microsoft’s efforts in this area).

Google’s cloud and the rise of containers

To set the stage a bit, it’s worth remembering where Google Cloud and container management were five years ago.


By Frederic Lardinois

Zuora Central lets developers build connected workflows across services

Zuora has been known throughout its 12-year history as a company that helps manage subscription-based businesses. Today, at its Subscribed San Francisco customer conference, it announced that it’s adding a new twist to the platform with a new service called Zuora Central.

The latest offering gives developers a workflow tool to build connections between systems, extending a given service using both Zuora’s service set and any external services that make sense. Tien Tzuo, founder and CEO at Zuora, sees this as a way for his customers to offer a set of integrated services that take advantage of the fact that these individual things are connected to the internet, whether that’s a car, an appliance, a garage door opener or a multimillion-dollar medical device.

And this isn’t even necessarily about taking advantage of your smartphone, although it could include that. It’s about extending the device or service to automate a set of related tasks beyond the subscription service itself. “So you create a workflow diagram in Zuora Central, that’s going to convey all of the logic of this,” he said.

Zuora Central lets developers connect to both Zuora services and external services. Diagram: Zuora

As an example, Tzuo says imagine you are renting a car. You have reserved a Ford Focus, but when you get to the lot, you decide you want the Mustang convertible. You don’t have to pull out your phone. You simply walk up to the car and touch the handle. It understands who you are and begins to make a series of connections. There may be a call to unlock the car, a call to the music system to play your driving playlist on Spotify, a call to your car preferences that can set the seats and mirrors and so forth. All of this is possible because the car itself is connected to the internet.
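The touch-the-handle scenario is essentially one event triggering an ordered chain of service calls. A minimal Python sketch of that kind of workflow follows; the step names and the `run` helper are hypothetical illustrations of the pattern, not Zuora Central’s actual API:

```python
# Each workflow step is a small function that reads and updates a shared
# context dict; in a real system each step would be a call to a service.
def unlock_car(ctx):
    ctx["unlocked"] = True
    return ctx

def start_playlist(ctx):
    ctx["music"] = f"{ctx['user']}'s driving playlist"
    return ctx

def apply_preferences(ctx):
    ctx["seats_and_mirrors_set"] = True
    return ctx

# The workflow diagram boils down to an ordered list of steps.
WORKFLOW = [unlock_car, start_playlist, apply_preferences]

def run(event):
    """Run every step of the workflow, in order, on the triggering event."""
    ctx = dict(event)
    for step in WORKFLOW:
        ctx = step(ctx)
    return ctx

# Touching the handle produces the event that kicks off the whole chain.
state = run({"user": "alice", "car": "Mustang"})
```

Modeling the logic as a declared sequence of steps, rather than bespoke code per device, is what makes such workflows easy to diagram and extend.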

Zuora workflow in action. Screenshot: Zuora

Under the hood, the workflow tool takes advantage of a number of different technologies to make all of this happen including a custom object model, an events and notifications system and a data query engine. All of these tools combine to let developers build these complex workflows and connect to a number of tasks, greatly enhancing the capabilities of the base Zuora platform.

As Tzuo sees it, it’s not unlike what happened when he was chief marketing officer at Salesforce, before starting Zuora, when the company launched Force.com and the AppExchange as a way to allow developers to extend the Salesforce product beyond its base capabilities.

Tzuo also sees this platform play as a logical move for any company that aspires to be a billion-dollar-revenue company. The company has a ways to go in that regard: in its most recent report at the end of May, it reported $64.1 million in revenue for the quarter. Whether this new capability will do for Zuora what extending the platform did for Salesforce remains to be seen, but this is certainly a big step for the company.


By Ron Miller

How we scaled our startup by being remote first

Startups are often associated with the benefits and toys provided in their offices. Foosball tables! Free food! Dog friendly! But what if the future of startups was less about physical office space and more about remote-first work environments? What if, in fact, the most compelling aspect of a startup work environment is that the employees don’t have to go to one?

A remote-first company model has been Seeq’s strategy since our founding in 2013. We have raised $35 million and grown to more than 100 employees around the globe. Remote-first is clearly working for us and may be the best model for other software companies as well.

So, who is Seeq and what’s been the key to making the remote-first model work for us?  And why did we do it in the first place?

Seeq is a remote-first startup – i.e. it was founded with the intention of not having a physical headquarters or offices, and still operates that way – that is developing an advanced analytics application that enables process engineers and subject matter experts in oil & gas, pharmaceuticals, utilities, and other process manufacturing industries to investigate and publish insights from the massive amounts of sensor data they generate and store.

To succeed, we needed to build a team quickly with two skill sets: 1) software development expertise, including machine learning, AI, data visualization, open source, agile development processes, cloud, etc. and 2) deep domain expertise in the industries we target.

Which means there is no one location where we can hire all the employees we need: Silicon Valley for software, Houston for oil & gas, New Jersey for fine chemicals, Seattle for cloud expertise, water utilities across the country, and so forth. But being remote-first has made recruiting and hiring for these high-demand roles much easier than if we were co-located.

Image via Seeq Corporation

Job postings on remote-specific web sites like FlexJobs, Remote.co and Remote OK typically draw hundreds of applicants in a matter of days. This enables Seeq to hire great employees who might not call Seattle, Houston or Silicon Valley home – and is particularly attractive to employees with location-dependent spouses or employees who simply want to work where they want to live.

But a remote-first strategy and hiring quality employees for the skills you need is not enough: succeeding as a remote-first company requires a plan and execution around the “3 C’s of remote-first”.

The three requirements to remote-first success are the three C’s: communication, commitment and culture.


By Arman Tabatabai

The challenges of truly embracing cloud native

There is a tendency at any conference to get lost in the message. Spending several days immersed in any subject tends to do that. The purpose of such gatherings is, after all, to sell the company or technologies being featured.

Against the beautiful backdrop of the city of Barcelona last week, we got the full cloud native message at KubeCon and CloudNativeCon. The Cloud Native Computing Foundation (CNCF), which houses Kubernetes and related cloud native projects, had certainly honed the message, along with the community that came to celebrate the project’s five-year anniversary. The large crowds that wandered the long hallways of the Fira Gran Via conference center proved it was getting through, at least to a specific group.

Cloud native computing involves a combination of software containerization along with Kubernetes and a growing set of adjacent technologies to manage and understand those containers. It also involves the idea of breaking down applications into discrete parts known as microservices, which in turn leads to a continuous delivery model, where developers can create and deliver software more quickly and efficiently. At the center of all this is the notion of writing code once and being able to deliver it on any public cloud, or even on-prem. These approaches were front and center last week.

Five years in, many developers have embraced these concepts, but cloud native projects have reached a size and scale where they need to move beyond the early adopters and true believers and make their way deep into the enterprise. It turns out that it might be a bit harder for larger companies with hardened systems to make wholesale changes in the way they develop applications, just as it is difficult for large organizations to take on any type of substantive change.

Putting up stop signs


By Ron Miller

Takeaways from KubeCon; the latest on Kubernetes and cloud native development

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller discuss major announcements that came out of the Linux Foundation’s European KubeCon/CloudNativeCon conference and discuss the future of Kubernetes and cloud-native technologies.

Nearly doubling in size year-over-year, this year’s KubeCon conference brought big news and big players, with major announcements coming from some of the world’s largest software vendors including Google, AWS, Microsoft, Red Hat, and more. Frederic and Ron discuss how the Kubernetes project grew to such significant scale and which new initiatives in cloud-native development show the most promise from both a developer and enterprise perspective.

“This ecosystem starts sprawling, and we’ve got everything from security companies to service mesh companies to storage companies. Everybody is here. The whole hall is full of them. Sometimes it’s hard to distinguish between them because there are so many competing start-ups at this point.

I’m pretty sure we’re going to see a consolidation in the next six months or so where some of the bigger players, maybe Oracle, maybe VMware, will start buying some of these smaller companies. And I’m sure the show floor will look quite different about a year from now. All the big guys are here because they’re all trying to figure out what’s next.”

Frederic and Ron also dive deeper into the startup ecosystem rapidly developing around Kubernetes and other cloud-native technologies and offer their take on what areas of opportunity may prove to be most promising for new startups and founders down the road.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 


By Arman Tabatabai

Andreessen pours $22M into PlanetScale’s database-as-a-service

PlanetScale’s founders invented the technology called Vitess that scaled YouTube and Dropbox. Now they’re selling it to any enterprise that wants its data both secure and consistently accessible. And thanks to its ability to re-shard databases while they’re operating, it can solve businesses’ troubles with GDPR, which demands they store some data in the same locality as the user it belongs to.

The potential to be a computing backbone that both competes with and complements Amazon’s AWS has now attracted a mammoth $22 million Series A for PlanetScale. Led by Andreessen Horowitz and joined by the firm’s Cultural Leadership Fund, US Digital Service head Matt Cutts and existing investor SignalFire, the round is a tall step up from the $3 million seed the startup raised a year ago.

“What we’re discovering is that people we thought were at one point competitors, like AWS and hosted relational databases — we’re discovering they may be our partners instead, since we’re seeing a reasonable demand for our services in front of AWS’ hosted databases,” says CEO Jitendra Vaidya.

PlanetScale co-founders (from left): Jiten Vaidya and Sugu Sougoumarane

Vitess, a predecessor to Kubernetes, is horizontal-scaling sharding middleware built for MySQL. It lets businesses segment their databases to boost memory efficiency without sacrificing reliable access speeds. PlanetScale sells Vitess in four ways: hosting on its database-as-a-service, licensing the technology to run on-premises for clients or through another cloud provider, professional training for using Vitess, and on-demand support for users of the open-source version of Vitess.
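The core idea behind sharding middleware like Vitess is routing each row to a shard by its key, so the database scales horizontally while applications still see a single endpoint. A toy sketch of that routing step is below; real Vitess maps keys into keyspace ID ranges across tablets, whereas this simplified version just hashes into a fixed number of shards with Python’s standard library:

```python
import hashlib

NUM_SHARDS = 4

def shard_for(user_id: str) -> int:
    """Deterministically map a sharding key to one of the shards."""
    digest = hashlib.md5(user_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# The middleware sends each row (and each query) to exactly one shard;
# the application keeps talking to what looks like one database.
shards = {i: [] for i in range(NUM_SHARDS)}
for uid in ["u1", "u2", "u3", "u4", "u5"]:
    shards[shard_for(uid)].append(uid)
```

Because the key-to-shard mapping is deterministic, reads for a user always land on the shard holding that user’s data, which is also what makes locality guarantees like GDPR data residency tractable.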

“We don’t have any concerns about the engineering side of things, but we need to figure out a go-to-market strategy for enterprises,” Vaidya explains. “As we’re both technical co-founders, about half of our funding is going towards hiring for those functions [outside of engineering], and making that part of our organization work well and get results.”


By Josh Constine

Serverless and containers: Two great technologies that work better together

Cloud native models using containerized software in a continuous delivery approach could benefit from serverless computing, where the cloud vendor provisions exactly the resources required to run a workload on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, it may not work for every situation in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud native world, you should be able to develop code once and run it anywhere, on-prem or on any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the right amount of resources to run a particular workload for the right amount of time and no more.
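The billing difference is easiest to see with numbers. Here is a back-of-the-envelope comparison of a dedicated VM versus per-invocation serverless billing; every rate and workload figure below is made up purely for illustration:

```python
# Hypothetical prices, for illustration only.
vm_per_hour = 0.10              # a dedicated VM is billed busy or idle
serverless_per_second = 0.0001  # billed only while the function runs

# A bursty, mostly idle workload over one day.
hours = 24
invocations = 10_000
seconds_per_invocation = 0.2

vm_cost = vm_per_hour * hours
serverless_cost = serverless_per_second * invocations * seconds_per_invocation
# With traffic like this, paying only for execution time wins by a wide margin.
```

The flip side, which the engineers quoted below get at, is that a steady, always-busy workload can invert this comparison, which is why serverless is not a perfect fit for every situation.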

Nothing is perfect

Such an arrangement would seem perfectly suited to a continuous delivery model, and while vendors have recognized the beauty of such an approach, as one engineer pointed out, there is never a free lunch in processes this complex, and it won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but she says that it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.
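The function model Sinha describes shrinks the unit of deployment to a single handler that the platform invokes per request, provisioning compute only while it runs. A minimal Lambda-style Python handler looks roughly like this (the event fields are illustrative):

```python
import json

def handler(event, context):
    """Entry point a functions platform (e.g. AWS Lambda) calls per invocation.
    The platform spins up resources for the duration of this call and bills
    only for that time."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Everything outside the handler, such as scaling, routing and the underlying machines, is the vendor's concern, which is precisely the narrowness Sinha is pushing back against.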

She says that Google has tried to be more expansive in its definition “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as a minimum container load time, a certain container kill time, or delivery to a specific location. He says in reality it won’t be fully automated, at least while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they need.
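In Kubernetes terms, the tuning Whittington describes typically takes the form of per-container resource requests and limits, which give the scheduler a floor and a ceiling for what a workload may consume. A sketch of such a spec fragment (the names and values are hypothetical):

```yaml
# Illustrative pod spec fragment: requests are the guaranteed floor,
# limits cap what the container may use before being throttled or killed.
apiVersion: v1
kind: Pod
metadata:
  name: tuned-workload        # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:1.0  # hypothetical image
      resources:
        requests:
          cpu: "250m"         # a quarter of a CPU core guaranteed
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```

Getting these numbers right is exactly the balancing act he mentions: set them too high and you pay for idle capacity, too low and the workload is starved.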

Vendors bringing solutions

The vendors are putting in their two cents, trying to create tools that bring this ideal together. For instance, Google announced a service called Google Cloud Run at Google Cloud Next last month. It’s based on the open-source Knative project and, in essence, combines the developer-friendly qualities of serverless with containers. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring these two technologies together in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud native technologies, whatever the approach may be. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions, and others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.


By Ron Miller

OpenFin raises $17 million for its OS for finance

OpenFin, the company looking to provide the operating system for the financial services industry, has raised $17 million in funding through a Series C round led by Wells Fargo, with participation from Barclays and existing investors including Bain Capital Ventures, J.P. Morgan and Pivot Investment Partners. Previous investors in OpenFin also include DRW Venture Capital, Euclid Opportunities and NYCA Partners.

Likening itself to “the OS of finance”, OpenFin seeks to be the operating layer on which applications used by financial services companies are built and launched, akin to iOS or Android for your smartphone.

OpenFin’s operating system provides three key capabilities which, while present on your mobile phone, have previously been absent in the financial services industry: easier deployment of apps to end users, fast security assurances for applications, and interoperability.

Traders, analysts and other financial service employees often find themselves using several separate platforms simultaneously, as they try to source information and quickly execute multiple transactions. Yet historically, the desktop applications used by financial services firms — like trading platforms, data solutions, or risk analytics — haven’t communicated with one another, with functions performed in one application not recognized or reflected in external applications.

“On my phone, I can be in my calendar app and tap an address, which opens up Google Maps. From Google Maps, maybe I book an Uber. From Uber, I’ll share my real-time location on messages with my friends. That’s four different apps working together on my phone,” OpenFin CEO and co-founder Mazy Dar explained to TechCrunch. That cross-functionality has long been missing in financial services.

As a result, employees can find themselves losing precious time — which in the world of financial services can often mean losing money — as they juggle multiple screens and perform repetitive processes across different applications.
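The interoperability OpenFin describes is, at its core, a desktop message bus: applications publish events on shared topics and subscribe to the ones they care about, so an action in one app can be reflected in another. A toy sketch of that publish/subscribe pattern (not OpenFin's actual API):

```python
# Minimal in-process message bus illustrating the pub/sub pattern behind
# desktop interop layers; names here are illustrative, not OpenFin's API.
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        """Register a handler to receive every message on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        """Deliver a message to all handlers subscribed to the topic."""
        for handler in self._subscribers[topic]:
            handler(message)

# One app publishes a trade; a separate, unrelated app reacts to it.
bus = MessageBus()
received = []
bus.subscribe("trades", received.append)
bus.publish("trades", {"symbol": "AAPL", "qty": 100})
```

Because neither app needs to know about the other, only the topic, new applications can plug into the same workflow without repetitive manual re-entry across screens.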

Additionally, major banks, institutional investors and other financial firms have traditionally deployed natively installed applications in lengthy processes that can often take months, going through long vendor packaging and security reviews that ultimately don’t prevent the software from actually accessing the local system.

OpenFin CEO and co-founder Mazy Dar. Image via OpenFin

As former analysts and traders at major financial institutions, Dar and his co-founder Chuck Doerr (now President & COO of OpenFin) recognized these major pain points and decided to build a common platform that would enable cross-functionality and instant deployment. And since apps on OpenFin are unable to access local file systems, banks can better ensure security and avoid prolonged yet ineffective security review processes.

And the value proposition offered by OpenFin seems to be quite compelling. OpenFin boasts an impressive roster of customers using its platform, including more than 1,500 major financial firms, almost 40 leading vendors, and 15 of the world’s 20 largest banks.

Over 1,000 applications have been built on the OS, with OpenFin now deployed on more than 200,000 desktops — a noteworthy milestone given that the ever popular Bloomberg Terminal, which is ubiquitously used across financial institutions and investment firms, is deployed on roughly 300,000 desktops.

Since raising their Series B in February 2017, OpenFin’s deployments have more than doubled. The company’s headcount has also doubled and its European presence has tripled. Earlier this year, OpenFin also launched its OpenFin Cloud Services platform, which allows financial firms to launch their own private local app stores for employees and customers without writing a single line of code.

To date, OpenFin has raised a total of $40 million in venture funding and plans to use the capital from its latest round for additional hiring and to expand its footprint onto more desktops around the world. In the long run, OpenFin hopes to become the vital operating infrastructure upon which all developers of financial applications are innovating.

“Apple and Google’s mobile operating systems and app stores have enabled more than a million apps that have fundamentally changed how we live,” said Dar. “OpenFin OS and our new app store services enable the next generation of desktop apps that are transforming how we work in financial services.”


By Arman Tabatabai

VMware acquires Bitnami to deliver packaged applications anywhere

VMware announced today that it’s acquiring Bitnami, the packaged application company that was a member of the Y Combinator Winter 2013 class. The companies didn’t share the purchase price.

With Bitnami, the company can now deliver more than 130 popular software packages in a variety of formats, such as Docker containers or virtual machines, an approach that should be attractive for VMware as it transforms into more of a cloud services company.

“Upon close, Bitnami will enable our customers to easily deploy application packages on any cloud — public or hybrid — and in the most optimal format — virtual machine (VM), containers and Kubernetes helm charts. Further, Bitnami will be able to augment our existing efforts to deliver a curated marketplace to VMware customers that offers a rich set of applications and development environments in addition to infrastructure software,” the company wrote in a blog post announcing the deal.

Per usual, Bitnami’s founders see the exit through the prism of being able to build out the platform faster with the help of a much larger company. “Joining forces with VMware means that we will be able to both double-down on the breadth and depth of our current offering and bring Bitnami to even more clouds as well as accelerating our push into the enterprise,” the founders wrote in a blog post on the company website.

The company has raised a modest $1.1 million since its founding in 2011 and says that it has been profitable since the early days, shortly after it took that funding. In the blog post, the company states that, from customers’ perspective, nothing will change.

“In a way, nothing is changing. We will continue to develop and maintain our application catalog across all the platforms we support and even expand to additional ones. Additionally, if you are a company using Bitnami in production, a lot of new opportunities just opened up.”

Time will tell whether that is the case, but it is likely that Bitnami will be able to expand its offerings as part of a larger organization like VMware.

VMware is a member of the Dell federation of products and came over as part of the massive $67 billion EMC deal in 2016. The company operates independently, is sold as a separate company on the stock market and makes its own acquisitions.


By Ron Miller