New Relic is changing its pricing model to encourage broader monitoring

In the monitoring world, you typically pay a fee for every new instance you spin up and monitor. In a particularly active month, that can add up to a hefty bill, which pushes companies to limit what they monitor in order to control costs. New Relic wants to change that, and today it announced that it’s moving to a model where customers pay by the user instead, with a smaller, less costly data component.

The company is also simplifying its product set with the goal of encouraging customers to instrument everything instead of deciding what to monitor and what to leave out to control cost. “What we’re announcing is a completely reimagined platform. We’re simplifying our products from 11 to three, and we eliminate those barriers to standardizing on a single source of truth,” New Relic founder and CEO Lew Cirne told TechCrunch.

The way the company can afford to make this switch is by exposing the underlying telemetry database it created to run its own products. By using that database to track APM, tracing and metric data in one place, Cirne says New Relic can control costs much better and pass the savings on to customers, whose bills should be much smaller under the new pricing model.

“Prior to this, there has not been any technology that’s good at gathering all of those data types into a single database, what we would call a telemetry database. And we actually created one ourselves and it’s the backbone of all of our products. [Up until now], we haven’t really exposed it to our customers, so that they can put all their data into it,” he said.

New Relic Telemetry Data. Image: New Relic

The company is distilling the product set into three main categories. The first is the Telemetry Data Platform, which offers a single way to gather any events, logs or traces, whether they come from New Relic’s own agents, someone else’s, or open-source monitoring tools like Prometheus.
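
To give a sense of what that single ingest point looks like in practice, here is a minimal Python sketch that posts a custom event to New Relic’s Event API, one of the ingest endpoints behind the Telemetry Data Platform. The account ID, insert key, event type and attributes are placeholders, and the endpoint and payload shape follow New Relic’s public documentation rather than anything stated in this article.

```python
import json
import time

import requests

# Placeholders -- substitute a real account ID and insert key.
ACCOUNT_ID = "1234567"
INSERT_KEY = "YOUR_INSERT_KEY"

# New Relic's Event API endpoint (per the public docs, not this article).
URL = f"https://insights-collector.newrelic.com/v1/accounts/{ACCOUNT_ID}/events"

# A custom event; the eventType and attributes are made up for illustration.
events = [
    {
        "eventType": "CheckoutLatency",
        "durationMs": 182,
        "region": "us-east-1",
        "timestamp": int(time.time()),
    }
]

resp = requests.post(
    URL,
    headers={"Content-Type": "application/json", "X-Insert-Key": INSERT_KEY},
    data=json.dumps(events),
    timeout=10,
)
resp.raise_for_status()
print(resp.status_code)  # 200 on success
```

Metrics, logs and traces have analogous ingest APIs, so the same pattern covers the other telemetry types the platform accepts.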

The second product is called Full-stack Observability. It bundles all of the company’s previously separate products, such as APM, mobility, infrastructure and logging. Finally, the company is offering an intelligence layer called New Relic AI.

Cirne says that simplifying the product set and changing the billing model will save customers money through the efficiencies the company has uncovered. In practice, he says, pricing will combine a per-user charge with a data charge, but he believes the approach will result in much lower bills and more cost certainty for customers.

“It’ll vary by customer so this is just a rough estimate but imagine that the typical New Relic bill under this model will be a 70% per user charge and 30% data charge, roughly, but so if that’s the case, and if you look at our competitors, 100% of the bill is data,” he said.
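
To make that split concrete, here is a back-of-the-envelope sketch in Python. The per-user and per-GB rates are invented purely for illustration (New Relic’s actual prices aren’t given here); only the rough 70/30 split itself comes from Cirne’s estimate.

```python
# All rates below are invented for illustration; only the rough 70/30
# user-to-data split of a typical bill comes from Cirne's estimate.
PER_USER_PER_MONTH = 99.0   # hypothetical price per user, USD
PER_GB_PER_MONTH = 0.25     # hypothetical price per GB of ingested data, USD


def estimated_monthly_bill(users: int, data_gb: float) -> None:
    """Print a rough user-plus-data bill under the new model."""
    user_charge = users * PER_USER_PER_MONTH
    data_charge = data_gb * PER_GB_PER_MONTH
    total = user_charge + data_charge
    print(f"users: ${user_charge:,.2f} ({user_charge / total:.0%} of bill)")
    print(f"data:  ${data_charge:,.2f} ({data_charge / total:.0%} of bill)")
    print(f"total: ${total:,.2f}")


# With these made-up rates, 10 users and about 1.7 TB of ingested data
# lands near the 70/30 split Cirne describes.
estimated_monthly_bill(users=10, data_gb=1700)
```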

The new approach is available starting today. Companies can try it with a 100 GB, single-user account.


By Ron Miller

Google Cloud gives developers more insights into their networks

Google Cloud is launching a new feature today that gives its users a way to monitor and optimize how their data flows between their servers in the Google Cloud, other Google services, on-premises deployments and virtually any other internet endpoint. As the name implies, VPC Flow Logs are meant for businesses that already use Google’s Virtual Private Cloud features to isolate their resources from other users.

VPC Flow Logs monitors and logs all the network flows (both UDP and TCP) that are sent from and received by the virtual machines inside a VPC, including traffic between Google Cloud regions. All of that data can be exported to Stackdriver Logging or BigQuery if you want to keep it in the Google Cloud, or you can use Cloud Pub/Sub to export it to other real-time analytics or security platforms. The data updates every five seconds, and Google promises that using this service has no impact on the performance of your deployed applications.
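
For the Pub/Sub route out of Google Cloud, a minimal Python sketch using the google-cloud-pubsub client might look like the following. It assumes a Cloud Logging sink already routes the flow log entries to a Pub/Sub topic with a subscription attached; the project name, subscription name and payload field names are assumptions based on Google’s documented export format, not details from this announcement.

```python
import json

from google.cloud import pubsub_v1

# Assumes a logging sink already routes VPC flow log entries to a Pub/Sub
# topic; the project and subscription names here are hypothetical.
PROJECT_ID = "my-project"
SUBSCRIPTION_ID = "vpc-flow-logs-sub"

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)


def handle(message):
    """Print the connection tuple from each exported flow log entry."""
    entry = json.loads(message.data.decode("utf-8"))
    # Field names follow the flow log export schema as documented by Google;
    # treat them as an assumption if the schema has changed.
    conn = entry.get("jsonPayload", {}).get("connection", {})
    print(conn.get("src_ip"), "->", conn.get("dest_ip"), "port", conn.get("dest_port"))
    message.ack()


streaming_pull = subscriber.subscribe(subscription_path, callback=handle)
try:
    streaming_pull.result(timeout=30)  # listen for 30 seconds, then stop
except Exception:
    streaming_pull.cancel()
```

The same entries could instead be queried at rest in BigQuery if you keep the export inside Google Cloud.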

As the company notes in today’s announcement, this will give network operators far more insight into how the Google network performs and help them troubleshoot issues if they arise. It will also allow them to optimize their network usage and costs by giving them more information about their global traffic.

All of this data is also quite useful for performing forensics when it looks like somebody may have gotten into your network. If that’s your main use case, though, you probably want to export the data to a specialized security information and event management (SIEM) platform from vendors like Splunk or ArcSight.