Shopify Hits the Runway; Visa Ups the Ante: News Roundup

PaymentFacilitator’s News Roundup is a curated mix of the past week’s news and articles from around the web, including company announcements, global payments news, and other coverage and analysis of topics relevant to payment facilitators.

Part 2: Micro-segmentation as a Composition Challenge — Intent, Enforcement and Change


In a previous blog post, I covered some of the benefits of micro-segmentation and how Intent-Based Networking can help address the associated network implementation challenges.

In this blog post, I’ll cover some approaches to help reduce operational intricacies and how to manage the network changes.

Composing intent and enforcement

To reduce operational complexity, micro-segmentation policy must be portable across your private cloud and public clouds A, B, and C. By portable I mean it looks the same everywhere; I don’t mean it “can be ported,” because “can be ported” is a landmine of operational complexity. So if this is the nirvana we are seeking, why is it not out there yet? Because in a pre-Intent-Based Networking world, APIs and abstractions are much more tied to implementation than they should be or have to be. You start with an implementation and then try to put some abstraction on top of it, when it should be the other way around. By “implementation” here I specifically mean how the segmentation is “enforced” and what and where the enforcement points are. So let’s review what is out there.

In the majority of cases, the mechanism takes the form of an Access Control List (ACL); the difference lies in where it is applied. An ACL is for micro-segmentation what a forwarding table is for reachability. It may be applied on firewalls acting as choke points. In some designs it is applied on “top of the rack” (ToR) switches. In other cases, it is applied in the server or hypervisor. ACLs are precious resources, and any reference design has to account for the limits on how many ACL entries each enforcement point can support.

Most modern practices do allow micro-segmentation policies to be expressed in terms of endpoints and application workloads. Though this is typically constrained to a single domain (private or public cloud), it is nonetheless a higher level of abstraction than ACLs. There are a few challenges with translating these micro-segmentation intent policies into ACLs.

The first impedance mismatch is that intent policies are relational while ACLs are hierarchical. At the intent level you specify relationships between pairs of endpoints. Keep in mind that endpoints can be of different granularity: an IP endpoint, a virtual network, a security zone, or the Internet. When you render them as ACLs on a specific enforcement point, the ordering of rules (the hierarchy) matters when a conflict occurs. An ACL, as a hierarchical system, is very good at telling you exactly what is going to happen, but it is difficult to know whether it is doing what you want it to do. Your intent is “A can talk to B on port 80,” which is easy to reason about, but the actual ACL implementation is a long list of rules, making it difficult to determine whether there are conflicts that violate your intent.
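To make the mismatch concrete, here is a minimal sketch (all names and the specificity heuristic are hypothetical, not any particular vendor’s algorithm) of rendering unordered, relational intent rules into a first-match-wins ACL, where ordering must be derived:

```python
# Sketch: rendering relational intent into an ordered ACL.
# Intent rules are an unordered set of (src, dst, port, action) relationships;
# the ACL is an ordered list where the first match wins, so an ordering must
# be chosen -- here, simply "most specific rule first".

from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class IntentRule:
    src: str            # endpoint or group name, e.g. "web", "db", or "any"
    dst: str
    port: Optional[int]  # None means any port
    action: str          # "allow" or "deny"

def specificity(rule: IntentRule) -> int:
    """More specific rules (named endpoints, explicit port) sort earlier."""
    score = 0
    score += 2 if rule.src != "any" else 0
    score += 2 if rule.dst != "any" else 0
    score += 1 if rule.port is not None else 0
    return score

def render_acl(intent: List[IntentRule]) -> List[IntentRule]:
    """Render unordered intent into a first-match-wins ACL."""
    return sorted(intent, key=specificity, reverse=True)

intent = [
    IntentRule("any", "any", None, "deny"),   # default deny (whitelist model)
    IntentRule("web", "db", 5432, "allow"),   # "web can talk to db on 5432"
    IntentRule("web", "db", None, "deny"),    # block everything else to db
]
acl = render_acl(intent)
```

The intent is easy to reason about rule by rule; correctness of the rendered ACL, however, depends entirely on the ordering heuristic, which is exactly the hierarchy problem described above.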

As an example, a conflict exists when the same packet matches two different policies that specify different actions (say, allow vs. deny). The first challenge is how to compose the policies that exist between different endpoints. You need to alert a user when a conflict exists and resolve the conflict based on a global policy or ask the user for input. To be able to do this effectively, your system must be able to reason about the existence of the conflict. Yes, you guessed it: a single source of truth that stores your intent will be able to reason about that existence.
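A minimal sketch of that conflict reasoning (the rule representation and matching logic are illustrative assumptions, not a production policy engine):

```python
# Sketch: detecting a conflict between intent policies.
# A conflict exists when the same (src, dst, port) packet can match two
# rules whose actions disagree (e.g. allow vs. deny).

def matches(rule, src, dst, port):
    """True if the rule applies to this packet; "any"/None act as wildcards."""
    return (rule["src"] in (src, "any")
            and rule["dst"] in (dst, "any")
            and rule["port"] in (port, None))

def find_conflicts(rules, packets):
    """Return the packets that are matched by rules with differing actions."""
    conflicts = []
    for pkt in packets:
        actions = {r["action"] for r in rules if matches(r, *pkt)}
        if len(actions) > 1:
            conflicts.append(pkt)
    return conflicts

rules = [
    {"src": "A", "dst": "B",   "port": 80,   "action": "allow"},  # endpoint-level
    {"src": "A", "dst": "any", "port": None, "action": "deny"},   # network-level
]
# The A->B:80 packet matches both rules with different actions: a conflict
# that a user should be alerted to (or that a global policy should resolve).
conflicts = find_conflicts(rules, [("A", "B", 80), ("A", "C", 22)])
```

The key point is that this check runs over the intent itself; it needs no knowledge of where or how the rules will eventually be enforced.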

If a micro-segmentation API that your provider/solution exposes to you looks hierarchical (it asks you for rule ordering, for example) you should worry, as it means it is driven by implementation and may not be portable. If your micro-segmentation API documentation warns you that there may be conflicts if you apply rules at both the virtual network level as well as at the individual endpoint level, it means it is not doing that conflict reasoning for you. A genuine Intent-Based Networking system will inherently perform this conflict reasoning for you as validation is a key concept in these systems.

You may hear arguments that applying whitelist ACLs as close as possible to endpoints (read: server/hypervisor) is all you need. But there are quite a few situations where that is not true. Your IoT endpoints may not have ACL capability, yet they are plugged into your network. Where do you enforce rules for them? The abstraction of endpoints in your policy must have policies as an integral part of their specification, but that does not mandate that the endpoints themselves implement policy enforcement mechanisms.

This decoupling is crucial. You will want to place IoT endpoints in groups (say, sensors of the same type) and have the policy follow them wherever they appear. Intent captures this easily, and your Intent-Based Networking system handles conflicts and enforcement points. APIs slapped on top of enforcement mechanisms are not necessarily a higher-level abstraction. If you anticipate an infrastructure with a mix of IoT devices and compute nodes, you will need a system that decouples security enforcement from micro-segmentation policy. Not to mention that reachability has to be there as well.
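A small sketch of that decoupling (group names, policy shape, and the lookup function are hypothetical): policy attaches to groups, endpoints derive their effective rules from membership, and where enforcement happens, perhaps on a ToR upstream of an ACL-incapable sensor, is a separate decision entirely.

```python
# Sketch: decoupling micro-segmentation policy from enforcement.
# Policies are attached to endpoint groups; an endpoint's effective rules
# are derived from its group membership, so the policy follows the endpoint
# wherever it appears -- even if the endpoint itself (e.g. an IoT sensor)
# cannot enforce anything.

groups = {
    "sensors":    {"temp-01", "temp-02"},
    "collectors": {"collector-01"},
}

# Intent: sensors may send to collectors on UDP/5683; nothing else.
group_policies = [("sensors", "collectors", 5683, "allow")]

def effective_rules(endpoint):
    """Rules that apply to an endpoint, derived from group membership."""
    member_of = {g for g, eps in groups.items() if endpoint in eps}
    return [p for p in group_policies if p[0] in member_of or p[1] in member_of]

# A new sensor plugs in: add it to the group and the policy follows it;
# no per-endpoint rule authoring, no assumption about the enforcement point.
groups["sensors"].add("temp-03")
```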

Dealing with change

Say you have discovered your house and workloads and established some baseline. The next question is how you deal with change. One approach is to keep doing discovery; if you choose this approach, you will be stuck at 80-90% accuracy. A more robust solution is to supply an explicit definition of new workloads as part of the intent going forward. This way you can actually start approaching 100% accuracy, as legacy workloads harden over time and new workloads are defined using a “zero trust” model. Your discovery capability then becomes a powerful validation tool rather than an intent re-construction tool.

Possible changes include endpoints, their locations, the policies applied to them, enforcement points, reachability policies, and things failing. You need an event-driven system to adapt to any of these changes. You need to be able to subscribe to changes of interest at the intent and operational level and reason about them programmatically to adapt to the new state. This real-time aspect is core to Intent-Based Networking Level 2 systems as defined in Intent-Based Networking Taxonomy.
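The subscription model can be sketched as follows (the store, event names, and callback signature are hypothetical, intended only to show the shape of an event-driven intent store):

```python
# Sketch: an event-driven view of intent changes. Subscribers register
# interest in a change type (e.g. an endpoint being added) and react to
# drive the system toward the new desired state.

from collections import defaultdict

class IntentStore:
    """A toy single source of truth that publishes changes to subscribers."""

    def __init__(self):
        self._subs = defaultdict(list)   # event type -> list of callbacks
        self.endpoints = {}              # endpoint name -> group

    def subscribe(self, event_type, callback):
        self._subs[event_type].append(callback)

    def add_endpoint(self, name, group):
        """Mutate intent, then notify everyone who cares about this change."""
        self.endpoints[name] = group
        for cb in self._subs["endpoint_added"]:
            cb(name, group)

log = []
store = IntentStore()
# A hypothetical renderer subscribes so it can re-derive enforcement state.
store.subscribe("endpoint_added",
                lambda name, group: log.append(f"render ACLs for {name} in {group}"))
store.add_endpoint("temp-04", "sensors")
```

The same pattern extends to failures, policy edits, and endpoint moves: each is just another event type against the same store.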

Let’s illustrate why dealing with change is the most difficult aspect of any implementation. When you start from scratch, you can write a workflow that goes from a blank sheet to a fully functional system with millions of objects fairly easily. It is not the complexity of the end result that matters but the emptiness of the starting point: there are no preconditions to check, you cannot break anything or violate an existing policy, and the end result is predictable. Day 0 and Day 1 are sunny. Then you find you need to change something small. It is not the size of the change that matters but its potential impact. You need to check preconditions (and there are millions of objects to potentially reason about) and make sure you don’t disrupt an existing customer or application.

Micro-segmentation helps with limiting the blast radius and is a proven technique, but to reason about this smaller blast radius, you need a system that allows you to ask all the right questions about all the segments and their relationships. Only that capability will make your Day 2 sunny as well.

With Intent-Based Networking, Day 2 operations are implemented using the same model and mechanisms as Day 0 and Day 1 operations. The only difference is the nature of the change and the pre-conditions and post-conditions, but the mechanism is the same. In fact, if you have different systems to deal with Day 0, Day 1, and Day 2 operations, you are in trouble.

Conclusion

To put it simply, micro-segmentation provides great benefits but is difficult to implement in a scalable, portable fashion without a single source of truth that ties together policy and enforcement, reachability and security, and Day 0, 1, and 2 operations. By defining policies at the endpoint level using intent, you embed these policies in the infrastructure and move away from point products and tools that deal with enforcement silos. This is essential as enforcement points become increasingly distributed. Border Gateway Protocol (BGP) is the control plane for reachability; an event-driven single source of truth is the control plane for micro-segmentation.

About the Author – Sasha Ratkovic:

Sasha Ratkovic is a thought leader in Intent-Based Analytics and a very early pioneer in Intent-Based Networking. He has deep expertise in domain abstraction and intent-driven automation. Sasha holds a Ph.D. in Electrical Engineering from UCLA.

Part 1: Micro-segmentation as a Composition Challenge


There have been many good articles written recently about micro-segmentation and the benefits it delivers. Almost all of them suggest strategies for approaching the implementation, as it is not a trivial task. Intent-Based Networking is well positioned to address these and other network implementation challenges, and this blog post will explain why. First, let’s review the benefits of micro-segmentation.

Benefits of Micro-segmentation

Enumerating the benefits of micro-segmentation is like borrowing pages from Sun Tzu’s “The Art Of War.” It shifts focus, at least from the security perspective, from reacting to the enemy to focusing on your own strengths. It is a strategy from which victory evolves without having to fight the enemy. You ensure the safety of your defense by holding positions that cannot be attacked.

What are one’s own strengths? An example of a brilliant analogy can be found in this article about 3 Networking Innovations Businesses Desperately Need: “You know your house better than an attacker.” You understand how it is segmented better and you reduce the attack surface and the blast radius. Further down, I will describe an approach to implementing this strategy. (Hint: knowing your house is like knowing your intent). Let’s start with describing the challenges.

“Knowing Your House” Challenge

If you are relying on knowing your house to gain an advantage, you had better know your house well. Using the “whitelist” model, you explicitly define which communication patterns are allowed, and mistakes made at this stage can lose you the battle. This is especially daunting when your house is complex. Switching from the analogy to the technical problem domain, the question is: how do you determine which workload endpoints you have, so that you can define and “hold the positions that cannot be attacked?”

One way to do this is to perform discovery of applications and their behaviors and then build model representations based on these behaviors. This is a tremendous help in dealing with application-level “brownfield.” There are “whitelist” solutions on the market, but these lack discovery of “brownfield” applications, which has proven to be a big pain point. The fundamental issue is that discovery frequently relies on Machine Learning (ML) or Artificial Intelligence (AI) techniques (which are 80-90% accurate); mapping what is on the wire to the actual workload communication patterns is a non-deterministic, error-prone, and expensive process.

This is in dramatic contrast with the “five-nines” requirement for network uptime. To make matters worse, some of the “discovered” workloads may be malicious, so you need to deal with that as well. Or maybe you are the lucky one (or should I say the proactive one) and all these workloads are already known. Whether the definition comes from a spreadsheet or from discovery, you need to represent this knowledge with a data model and a queryable data store. Knowing your house is knowing the intent. This intent must be stored in a single source of truth, as described in Intent-Based Networking Taxonomy. Level 1 Intent-Based Networking has this single source of truth.
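As a minimal illustration (the endpoint attributes and query helper are invented for this sketch), the point of the data model is that everything about your house, whatever its origin, lands in one queryable place:

```python
# Sketch: "knowing your house" as a queryable single source of truth.
# Workload endpoints and allowed flows live in one data model, whether the
# knowledge came from a spreadsheet import or from discovery.

endpoints = {
    "web-01": {"zone": "dmz",  "source": "spreadsheet"},
    "db-01":  {"zone": "data", "source": "discovery"},
}
allowed_flows = [("web-01", "db-01", 5432)]   # the whitelist intent

def query(predicate):
    """Query the source of truth, e.g. for all discovered endpoints
    (candidates for validation, since discovery is only 80-90% accurate)."""
    return [name for name, attrs in endpoints.items() if predicate(attrs)]

discovered = query(lambda a: a["source"] == "discovery")
```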

Composing Reachability and Security

Segmentation (micro-segmentation’s less granular cousin) has been embedded in standard networking constructs such as routes, VLANs, and VRFs for some time. While these allow for segmentation, they were (and still are) constructs whose primary purpose is enabling “reachability.” The primary goal of routing protocols is to exchange reachability information and to do so at scale (the reason for mentioning scale will become obvious in a moment). You can implement segmentation at the reachability level by partitioning the reachability resources (route filtering, VLANs, VRFs). Micro-segmentation requires higher granularity in order to help with security.

More importantly, reachability is a prerequisite for micro-segmentation. As such, micro-segmentation requires two functions, reachability and security, to be composed coherently. Before the security requirement “endpoint A must be able to talk only to endpoint B, and only on port 80” can be enforced, these endpoints must be reachable in the first place. You can place Access Control List (ACL) rules on your servers all day long, but if there is no reachability between them, those ACLs will be no-ops.
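The composition can be stated in one line (the data shapes here are illustrative assumptions): traffic flows only when both functions permit it, so a security rule with no underlying reachability is dead weight.

```python
# Sketch: composing reachability and security. An ACL entry is a no-op
# unless the underlying reachability exists; the effective policy is the
# intersection of the two functions.

reachable = {("A", "B")}                        # reachability (e.g. shared VRF)
acl_allows = {("A", "B", 80), ("A", "C", 80)}   # security intent rendered as ACLs

def can_communicate(src, dst, port):
    """Traffic flows only if reachability AND security both permit it."""
    return (src, dst) in reachable and (src, dst, port) in acl_allows

# A->B:80 works: both functions permit it.
# A->C:80 is allowed by the ACL, but with no reachability the rule is a no-op.
```

A single source of truth that holds both sets is what makes questions like “which of my security rules are actually enforceable?” answerable at all.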

Interactions between security and reachability can — and usually do — get more interesting. You may decide to segment reachability domains so that you can re-use IP addressing resources for resource optimization or workload portability purposes and within these reachability domains you may decide to apply micro-segmentation policies. The phrase “within these reachability domains” implies knowledge that a given endpoint is a member of a particular reachability domain — and is subject to micro-segmentation policy.

Say, for example, you want your workloads to span the public and private cloud. “Span” here requires managing security and reachability in an integrated fashion. One option is to lay down a pipe between the clouds and leak the routes, but you must also punch holes through your whitelist policies. Again, integrated reachability and security is at the heart of the challenge.

Some of the most widely used Software-Defined Network (SDN) solutions have the interesting property that reachability and security are dealt with using the same mechanism. This is because the high granularity of explicit reachability information (mapping of overlay addressing space to an underlay) inherently allows for micro-segmentation which solves the composition challenge.

This comes at a cost, though. Routing protocols have evolved (and hardened in the process) over the last few decades and a lot of effort has been put into solving reachability at scale where SDN has had challenges. Decoupling reachability and security at the implementation level is critical for these two functions to evolve independently and at their own speeds. This does not mean that endpoint intent needs to be decoupled as well, though. On the contrary, you may want your intent specification to be as simple as possible and make reachability and security properties of the endpoints, yet decoupled from how that intent is implemented.

You should have the choice between implementing a pure SDN solution that fits your scale requirements or a decoupled implementation that offers you more choice and scale down the road. Simplicity is usually used as a valid argument for SDN overlays. Intent-Based Networking delivers the same simplicity without being bound to specific implementation, thus offering choice and agility.

This challenge points to the fundamental requirement for an Intent-Based Networking system to serve as a single source of truth across different functions, which in this example are reachability and security. When different functions live in different tools or data stores, the operator must stitch them together, which increases the danger of making a mistake, introducing a vulnerability, and losing the battle.

Reducing (or controlling) the attack surface is a key element of a secure solution which is primarily achieved by controlling reachability. The two need to be tightly coordinated. Interactions with other functions, such as Load Balancing (LB), Quality of Service (QoS), or High Availability (HA) whose logical place is in the single source of truth are left as an exercise for the reader or may even be the subject of a follow-up article (let me know your thoughts in the comments).

In my next blog post, I’ll delve into intent and enforcement and what that means for reducing the operational complexities of micro-segmentation policy.


Registration is Now Open for PF WORLD 2019: Don’t Miss Your Spot!

Registration is now open, the location has been secured, and the agenda is nearly finalized. PF WORLD is back by popular demand, and this time we’ve added an extra day to give you even more of what you asked for.

PFs Make Showing on Forbes Fintech 50

This week, Forbes magazine announced its 2019 listing of the Fintech 50 – a group it describes as “innovators who are changing how people save, spend and invest.” See which payment facilitators made the list.

Yen and Euros: Shopify Enables Multiple Currencies for International Sellers

This week, ecommerce provider Shopify introduced a new feature for its international sellers. They now can sell in multiple currencies, providing a more localized experience for their customers, while accepting payment for the sale in their own currency.

Adyen Joins Stripe, Square and PayPal; JPMorgan Goes Fintech: News Roundup

PaymentFacilitator’s News Roundup is a curated mix of the past week’s news and articles from around the web, including company announcements, global payments news, and other coverage and analysis of topics relevant to payment facilitators.

The PF Model is Exploding. Why?

The goal of this podcast is to provide the listener with a broad understanding of the PF model, the current landscape of the PF ecosystem and where the PF model is heading into the future.