PFs Help Remove Roadblocks on Way to Financial Inclusion

Payments
Payment Facilitator
In a report published recently, Mastercard examines the barriers to financial inclusion in emerging markets and identifies ways forward to enable broader electronic payments acceptance. Payment facilitators are well positioned to help drive this effort.

Bankers Protest Square Application, MasterCard and PayPal Go Global: News Roundup

Payments
Payment Facilitator
MasterCard again partners with PayPal and GoFundMe partners with Adyen, while Blackbaud expands its offerings through its JustGiving acquisition and Square receives an unfavorable response from the ICBA. Here’s your weekly news roundup.

AOS™ 2.0: Underlay – Overlay Integration for the Data Center

Networks
Apstra Blog

The introduction of overlays in the early days of SDN enabled organizations to bridge the gap between the dynamic nature of their business policies and the static nature of their network. At the same time, overlays introduced significant challenges, which limited their adoption in the enterprise.

Responding to customer requests in this area, Apstra is announcing today AOS 2.0. Leveraging the recent advances in network operating system APIs and switch silicon support for VXLAN, AOS 2.0 delivers the first intent-based integrated underlay and overlay solution for the data center network.

Around a decade ago when SDN discussions first began, switches had no APIs, and to deliver dynamic policy there was no choice but to bypass networking engineering teams and extend an overlay on top of the physical network. This approach created a number of problems that limited the adoption of the technology:

Underlays and overlays are opaque to each other. Because the underlay and overlay are completely decoupled, it is that much harder for IT teams to debug networking problems. Was it caused by the overlay? The underlay? Through which links or interfaces do the packets pertaining to this particular overlay tunnel flow?

Organizational processes break with decoupled underlays and overlays. Overlays made it unclear who was really responsible for network services. The network engineering team? The compute team? The cloud team? Compute teams are often driven to buy and operate an overlay without the participation of network teams. That can mean two network operators in the same data center don’t really work together, or even acknowledge each other. Worse, the networking team often gets blamed, frequently without evidence. The network is the most critical asset in the data center; one operational team should be empowered and responsible, not two.

Overlays don’t easily work with bare metal devices. While most workloads are virtualized, there is a lot of bare metal out there: storage, databases, and many devices and appliances. The common solution is to build a gateway that encapsulates and decapsulates flows between the overlay and a bare metal segment. I am bullish on the use of an overlay, but quite honestly, a gateway for bare metal is a hack that only became necessary because of the unnatural fracture between underlay and overlay.

Introducing AOS 2.0:

Integrated overlay/underlay: Leveraging the innovative AOS state repository and intent modeling technology, all the state pertaining to the physical underlay, its topology, its logical entities, virtual networks, and all related telemetry are stored in the AOS distributed data store, and represented in a graph that captures all the pertinent relationships. As a result, AOS 2.0 provides powerful visibility into network state, including the physical and virtual, through its process of closed-loop, continuous validation of state against intent. In short, with AOS 2.0, the underlay/overlay correlation problem that has plagued first generation SDN solutions becomes a thing of the past.

All under the control of the network engineering team! AOS 2.0 enables a network infrastructure built on a modern leaf-spine L3 underlay using state-of-the-art multi-vendor equipment, with L2 services stitched on top as an overlay, both within the rack and across racks. Organizations can then deliver L2 connectivity for their applications and enforce policies and security zones across their various application service tiers, all under the control of network teams.

Natural support for bare metal servers. With AOS 2.0, configuring a virtual network spanning two separate racks is done automatically using an API or our web interface: “Please create a virtual network connecting these endpoints.” Under the hood, AOS 2.0: (1) configures VLANs connecting endpoints to Top of Rack (ToR) switches, (2) configures VXLAN tunnels between ToR switches that belong to different racks, and (3) validates in real time that the virtual network was properly configured (e.g., by continuously ensuring that VTEP routes are seen in the routing tables of all devices). This approach applies to both virtual and bare metal endpoints and doesn’t require gateways.
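The under-the-hood steps above can be sketched as a small planning function. This is a purely illustrative sketch of the described logic — the function name, endpoint format, and data shapes are all invented, not Apstra's actual implementation or API:

```python
def plan_virtual_network(endpoints, vlan_id):
    """Plan a virtual network from a list of (server, tor_switch) endpoints."""
    # (1) a VLAN on every Top of Rack switch that hosts an endpoint
    tors = sorted({tor for _, tor in endpoints})
    vlan_config = {tor: vlan_id for tor in tors}
    # (2) VXLAN tunnels between every pair of ToR switches in different racks
    vxlan_tunnels = [(a, b) for i, a in enumerate(tors) for b in tors[i + 1:]]
    # (3) validation would then continuously check that the corresponding
    # VTEP routes appear in every device's routing table
    return vlan_config, vxlan_tunnels

vlans, tunnels = plan_virtual_network(
    [("server1", "tor1"), ("server2", "tor2"), ("server3", "tor2")],
    vlan_id=100)
```

With endpoints on two racks, the sketch yields one VLAN per ToR and a single VXLAN tunnel between the two ToR switches.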

In addition, AOS 2.0 leverages the same AOS core to provide the unique AOS advantages:

Intent-Based, Vendor-Agnostic, and Closed-Loop: The vendor-independent approach of AOS is taken to another level with our multi-vendor implementation of VXLAN-based virtual networks. Owing to the Intent-Based approach of AOS, arcane vendor-specific configurations of VXLAN are abstracted away from network users. And owing to its closed loop continuous validation, complicated vendor-specific troubleshooting procedures are eliminated. The result is unprecedented hardware vendor choice and interoperability across both underlay and overlay.

Fully Automated: AOS 2.0 gets us closer to the vision of a self-operating, autonomous network infrastructure. It leverages the extensible foundation of AOS to deliver end-to-end automation of all phases in the life cycle of network services across the underlay and the overlay: design, build, deploy, and validate. This includes Day 0 design and initial provisioning, Day 1 builds, and Day 2 operational changes and troubleshooting. With unique system-wide commit capabilities for change operations and sophisticated continuous validation and troubleshooting through intent-based analytics, AOS 2.0 delivers the most powerful autonomous operation capabilities available today.

With AOS 2.0’s new enterprise-class features (including RBAC, HTTPS, and headless operations), organizations can confidently begin migrating from legacy L2 data center infrastructures to modern leaf-spine infrastructures with a fully automated and integrated L3 underlay and L2 overlay, all under the control of networking teams.

AOS 2.0 demonstrates that we are rapidly extending AOS capabilities. Customer-driven feature velocity is a key part of our vision, enabled by the extensible AOS architecture. This brings our customers expanded device support and advanced intent-based analytics, which are coming as part of turnkey applications in future releases. Contact us to learn what AOS 2.0 can do for your network infrastructure and organization. A new era has begun, and we’re not looking back!

Fraud Protection Partnership: Five Critical Questions Payment Facilitators Should Be Asking

Payments
Payment Facilitator
Fraud is something you never think about until it happens. Unfortunately, millions of people are finding out first hand just how frustrating the whole experience can be.

Intent-Based Analytics: What is it?

Networks
Apstra Blog

In my previous blog on Intent-Based Networking (IBN) I tried to provide some clarity about our vision of what IBN is, and what it is not. We stressed the importance of automating each and every phase of a service lifecycle by leveraging a single source of truth. Arguably, the most important phase is “closed-loop validation in the presence of change”. And at the core of it is what we call Intent-Based Analytics (IBA). Let’s define it and dig deeper into it.

Definition

The most important aspect of IBA is its ability to reason about change in a closed-loop fashion. And change can take place during any of the service lifecycle phases: design, build, deploy, or validate. During the design phase I may want to add a rack, or a new virtual network or isolation domain. During the build phase I may want to change some IP or ASN assignments, or swap an instance of a vendor A device for an instance of a vendor B device. During the deploy phase I may want to roll back some of the changes, or tweak the way config is generated. But the key question is: have these changes produced the expected results?

As we mentioned earlier, changes can originate from an operator (a business rule change). Could the AWS outage of February 2017 have been prevented if the automation system had semantic reasoning about the impact of the change? You could have a semantic validation that allows an operator to have, for example, at most two spine switches, or at most 10% of them (whichever is smaller), placed into “maintenance” mode at any given time. If you intend to go over that limit, semantic validation will prevent you from doing it.
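As a concrete sketch of such a semantic validation, here is one way the guard could look. The function and its arguments are invented for illustration, not part of any real AOS API:

```python
def maintenance_change_allowed(total_spines, in_maintenance, requested):
    """Allow at most two spines, or at most 10% of them (whichever is
    smaller), to be in "maintenance" mode at any given time."""
    limit = min(2, total_spines // 10)
    return in_maintenance + requested <= limit

# With 40 spines the limit is min(2, 4) = 2 switches: taking a second
# spine into maintenance is fine, a third is rejected.
```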

In the case of operational status changes, it is even more important to reason about the changes in closed loop. This is because (a) you are not in control of the change (you don’t control nature, let alone your data center gear), and (b) there could be thousands of these changes happening instantaneously, and you need to handle this at scale. But wait; operational status is not part of the intent, is it? Of course it is. Intent is not only about configuring devices using a declarative specification of intent. (For more details about Apstra’s definition of IBN, see these videos from NFD16.) It is, even more importantly, about specifying your expectations. You expect your Spine1 switch to be up. You expect Ethernet5 on it to be down. If they are not, you want an anomaly to be raised. You want the right people to be alerted. You want to reason about the change in the context of intent in an automated, programmatic way.

In the examples above, Spine1 was treated as an indispensable and unique system with identity, that can never be down. In the “pets” vs “cattle” analogy, Spine1 is your “pet”. But what if you designed your system for failure, and no system was indispensable or irreplaceable, i.e. you “routed around the failure”? You want to know how the ensemble “cattle” is doing. There is no single operational status to answer that. You need to synthesize that operational status by extracting knowledge from a multitude of raw telemetry data. And all of this in the context of intent that serves as your single source of truth. So what does “extracting knowledge” actually mean in this context?

Say for example you have a complex application “myApp”, that has multiple components running as VMs/containers on multiple servers, spread across racks in your data center. The servers fail and as a result workloads move around. Or the workload placement algorithm moves them around for optimization purposes. You don’t have “pet” servers on which your “pet” application is running. But you do want to know how your app is doing. So you may say for example, “I want a sufficiently large percentage of the links that carry the traffic of ‘myApp’ to (a) be up, and (b) not overloaded”.

Challenges and Solutions

So, what are the challenges associated with the ask above?

Challenge 1: Identifying Relevant Resources

The first challenge is identifying these links in the presence of change. You may not have a single source of truth to give you this info. Or the source of truth may not be up to date. Or it was updated, but you are not notified of the change. Perhaps your cabling is in spreadsheet A and your VM placement information is in system B. You could solve this with some scripting around the limitations, but in general, building an asynchronous notification mechanism from scratch is a daunting task.

Solution: In the Apstra Operating System (AOS™), these complexities are handled by the core platform. So how does this information get into AOS in the first place? Consider two scenarios. In the first, AOS is not controlling the placement of workloads; since this is done externally, you need to update AOS with the mapping of VMs to servers via well-defined and published APIs. This could be a simple integration. In the second, you build a reference design that runs the workload placement within AOS; as a result, AOS is aware of it, and nothing extra needs to be done.

Once the info is in AOS, you can use a Live Query pattern to identify the links of interest. To put it in plain English, you ask a question: “What are the links that carry traffic between the VMs that comprise the application ‘myApp’? And notify me of any change from now on.” In Fig. 1 below we see myApp consisting of two VMs that are hosted on servers attached to leafs 1 and 2. The set of links and switches that carry the traffic of interest is highlighted in blue in Fig. 1.

Now, suppose a change took place that moved vm2 to a server on leaf 9. There is a set of links and switches that no longer carries myApp traffic; these are highlighted in red in Fig. 2. This fact will be communicated via Live Query to AOS so that the telemetry on these links is no longer included in the calculations. Additionally, the new resources that do need to be included in the calculations (the new server hosting vm2, attached to leaf 9, and its associated links) are also communicated to AOS via the Live Query pattern.

Now you have a continuously up-to-date picture of the resources that matter to you. Challenge 1 solved with AOS.

Challenge 2: I don’t have the required (raw) telemetry

The more data you have and know how to reason about, the more powerful your analytics are.

Solution: AOS supports the quick and easy addition of new data via telemetry collection extensions. Look for upcoming blogs on this topic.

Challenge 3: Composing The Analytics Workflow

The next challenge is that raw telemetry data may not be sufficient for you. It may tell you that a link is “up” now, but is that enough info, given that it was possibly down for 2 minutes just before you checked? So an instantaneous state may not be what you care about, but rather some “aggregated” information about recent history. Say you want the link to be “up” for 99.9% of the time in the last hour and “down” for no more than 1s during any “down” event; otherwise you declare its recent connectivity status “unhealthy”. And regarding traffic on the links, you may want a link not to be overloaded (over a certain configurable threshold) for more than 5s, as you are OK with occasional spikes; otherwise you declare it “overloaded”.
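One way to express that aggregation, as a standalone sketch (the event format, function name, and thresholds here are illustrative, not AOS code):

```python
def link_connectivity_status(events, min_up_ratio=0.999, max_down_event_s=1.0):
    """Classify a link from its recent state history.

    events: list of (state, duration_seconds) pairs covering the recent
    window (e.g. the last hour), where state is "up" or "down".
    """
    total = sum(duration for _, duration in events)
    down_events = [d for state, d in events if state == "down"]
    up_ratio = (total - sum(down_events)) / total
    # unhealthy if uptime dipped below 99.9%, or any single down event
    # lasted longer than one second
    if up_ratio < min_up_ratio or any(d > max_down_event_s for d in down_events):
        return "unhealthy"
    return "healthy"
```

Note that a link that was down for a single 2-second event in the last hour is classified “unhealthy” even though its uptime ratio is still above 99.9%.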

Also, say you have built some resiliency into your application and can tolerate up to 10% of the links being “unhealthy” or “overloaded” (or both). Otherwise you want an anomaly, “myApp network has an issue,” raised, encapsulating all the complex knowledge extraction described above. This extraction is implemented by a logical data processing workflow consisting of the following stages (Fig. 3 below):

Collect statuses from all links carrying “myApp” traffic
Collect traffic utilization info from all links carrying “myApp” traffic
Create recent history time series (denoted as “TS” in Fig. 3) for the two metrics above
Perform the analysis on the time series to calculate time spent in “up”/”down” states and time over the utilization threshold.
Combine the results to calculate the percentage of “unhealthy” or “overloaded” links
Raise an anomaly if more than 10% of the links are “unhealthy” or “overloaded” or both

This logical analytics workflow is essentially a directed acyclic graph of stages, where each stage is one of (a) accumulation of resources of interest, (b) post-processing of telemetry, or (c) a conditional expression that raises an anomaly.

Now, assume you have solved challenges 1 and 2 (you have the right resources dynamically updated and all the required telemetry available). You have two options: (a) implement this logical knowledge-extraction workflow in your head by staring at a “dashboard” with tens or hundreds of plots, each corresponding to a stage in Fig. 3, and doing the correlation in your mind; or (b) build your own analytics workflow engine from scratch.

Solution: Wouldn’t it be nice if you could achieve the above by designing such a logical workflow, specifying it declaratively (as a directed graph of stages), and then deploying it with a single POST request to AOS, with the rest done automatically? Challenge 3 solved with AOS.
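To make the idea concrete, here is a toy, self-contained version of the Fig. 3 workflow expressed as a directed graph of named stages and evaluated locally. The stage names, sample telemetry, and evaluation style are all invented for illustration; in AOS the specification would be deployed via a POST request and run continuously by the platform:

```python
# raw telemetry for the links carrying "myApp" traffic (made-up sample data)
link_state = {"l1": "up", "l2": "down", "l3": "up", "l4": "up"}
link_util = {"l1": 0.45, "l2": 0.10, "l3": 0.97, "l4": 0.30}

# the workflow as named stages; later stages consume earlier ones,
# forming a small directed acyclic graph
stages = {
    "unhealthy": lambda: {l for l, s in link_state.items() if s == "down"},
    "overloaded": lambda: {l for l, u in link_util.items() if u > 0.90},
    "bad_fraction": lambda: len(stages["unhealthy"]() | stages["overloaded"]())
                            / len(link_state),
    "anomaly": lambda: stages["bad_fraction"]() > 0.10,
}

raise_anomaly = stages["anomaly"]()  # True: 2 of 4 links are bad
```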

Ok, we got this information-rich anomaly. Is there anything else one may want to do?

Challenge 4: Automated Drill-down

Say you have now identified that the number of unhealthy/overloaded links is higher than your application can tolerate. What do you want to do about it? Ideally, you may want to follow up with the next level of investigation and troubleshooting. But to do that you again need a system that stores all relevant context (a single source of truth), that you can ask the right questions of (reason programmatically), in the presence of change (asynchronously). Building it from scratch? Daunting.

Solution: Again, all of these features are an essential part of AOS. It takes your intent as context. It reacts to change. It synthesizes complex anomalies programmatically. And then you can react to these anomalies to automatically trigger the next level of detailed follow-ups that you could not afford (or did not want) to run all the time, across all resources. For example, given the anomaly raised above, you may want to do one or more of the following by leveraging the same mechanism used for any reasoning logic in AOS, the Live Query mechanism. It would react to the creation of the anomaly, and then, during the processing stage (callback function) of the Live Query:

Initiate the next level of telemetry to identify which flows are contributing to overloaded links, and try to identify elephant flows
Take a snapshot of relevant info (which other applications were running on the servers experiencing problems)
Log the info above for post mortem analysis
Any other secret technique under your belt

Best of all, these follow-up actions are simply the next set of stages in our knowledge-extraction analytics workflow. Challenge 4 solved by AOS.

Epilogue

If you were to do this yourself, in the absence of a single source of truth that you can programmatically reason about in the presence of change (functionality provided by the AOS Core), you would have to do the following:

Provide a coordination layer that will consolidate different sources of truth
Make that coordination layer capable of providing granular asynchronous notifications
Have a way to define your domain model and persist it
Have a framework to auto execute tests driven by your domain model
Build from scratch or manage a 3rd party data processing pipeline

One of my favourite quotes comes from Mike O’Dell, a pioneer who built UUNET’s network, likely the most sophisticated MPLS network of its time. I attended one of his presentations about the challenges of building MPLS networks, and his first slide read: “If you are not scared, you don’t understand.” So if you feel scared trying to build the system described above from scratch, that is a good sign. You are not alone. Whether you really want to do it is a different question.

Now imagine you could achieve all of the above by designing such a workflow: specify it declaratively (as a directed graph of stages) and deploy it with a single POST request, and have AOS automate the collection of the telemetry and the creation of the data processing pipeline, resulting in actionable, context-rich data, including, but not limited to, raised anomalies. Extract more knowledge while collecting less data. Or better yet, forget the “imagine” part above, and stay tuned.

Checkout.com Comes to Boston; Stripe Makes Checkout Forms Easier: News Roundup

Payments
Payment Facilitator
Checkout.com comes to Boston while new partnerships for Square and Wirecard enhance growth opportunities for cashless commerce. And a Pivotal investment drives its expansion efforts in fintech capabilities. Here’s your weekly news roundup.

Graph-Based Live Queries in AOS™

Networks
Apstra Blog

In our product, AOS™, we create and manage a model that represents a single source of truth regarding infrastructure, policies, constraints, etc. This model is subject to constant change, and we need to query it for various reasons, so we represent it as a graph. With the graph, all information about our network is modelled as nodes and the relationships between them.

Every object in the graph has a unique ID. Nodes have a type (a string) and a set of additional properties based on that type. For example, all switches in our system are represented by nodes of type “system” and can have a property “role” which determines the role in the network they are assigned (spine/leaf/server). Physical and logical switch ports are represented by “interface” nodes, which also have a property called “if_type”.

Relationships between nodes are represented as graph edges, which we call “relationships”. Relationships are directed: each relationship has a source node and a target node. Relationships also have a type, which determines which additional properties a particular relationship can have. E.g. “system” nodes have relationships of type “hosted_interfaces” towards “interface” nodes.

The set of possible node and relationship types is determined by a graph schema. The schema defines which properties nodes and relationships of a particular type can have, along with the types of those properties (string/integer/boolean/etc.) and constraints. We use and maintain an open source schema library, Lollipop, which allows flexible customization of value types.
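To illustrate what a schema buys you, here is a simplified, standalone checker. The property names and the checker itself are invented for illustration; the real system uses the Lollipop library rather than anything like this:

```python
# each node type declares its allowed properties and their Python types
NODE_SCHEMA = {
    "system": {"role": str, "system_id": str},
    "interface": {"if_type": str},
}

def validate_node(node_type, properties):
    """Check a node's properties against the schema before adding it."""
    allowed = NODE_SCHEMA.get(node_type)
    if allowed is None:
        raise ValueError("unknown node type: %r" % node_type)
    for name, value in properties.items():
        if name not in allowed:
            raise ValueError("unknown property: %r" % name)
        if not isinstance(value, allowed[name]):
            raise ValueError("bad type for property: %r" % name)
    return True
```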

Going back to the graph representing a single source of truth, one of the most challenging aspects was how to reason about it in the presence of change, coming from both the operator and the managed system. To support this we developed what we call the “Live Query” mechanism, which has three essential components:

Query Specification
Change Notification
Notification Processing

Query Specification

Having modelled our domain as a graph, it is important to be able to find particular patterns (subgraphs) in it.

You do that by running searches on the graph, specified by graph queries. The language to express the query is conceptually based on Gremlin, an open source graph traversal language. We also have parsers for queries expressed in another language, Cypher, the query language used by the popular graph database neo4j.

You start with a node() and then keep chaining method calls, alternating between matching relationships and nodes:

node('system').out().node('interface').out().node('link')

The query above, translated into English, reads something like: “starting from a node of type system, traverse any outgoing relationship that reaches a node of type interface, and from that node traverse all outgoing relationships that lead to a node of type link”.

At any point you can add extra constraints:

node('system', role='spine').out().node('interface', if_type='ip')

Notice the role='spine' argument: it will select only “system” nodes that have the “role” property set to “spine”. Same with the “if_type” property for “interface” nodes.

node('system', role=is_in(['spine', 'leaf']))
    .out()
    .node('interface', if_type=ne('ip'))

That query will select all “system” nodes whose role is either “spine” or “leaf”, and “interface” nodes whose “if_type” is anything but “ip” (“ne” means “not equal”).

You can also add cross-object conditions which can be arbitrary Python functions:

node('system')
    .out().node('interface', name='if1')
    .out().node('link')
    .in_().node('interface', name='if2')
    .in_().node('system')
    .where(lambda if1, if2: if1.if_type != if2.if_type)

You refer to objects by giving them names and using those names as argument names for your constraint function (you can override that, but it makes a convenient default behavior). So, in the example above, it will take the two “interface” nodes named “if1” and “if2”, pass them into the given “where” function, and filter out the paths for which the function returns False. Don’t worry about where you place your constraint: it will be applied during the search as soon as all objects referenced by the constraint are available.

Now that you have a single path, you can use it to do searches. However, sometimes you might want a query slightly more complex than a single path. To support that, the query DSL allows you to define multiple paths in the same query, separated by comma(s):

match(
    node('a').out().node('b', name='b').out().node('c'),
    node(name='b').out().node('d'),
)

This match() function creates a grouping of paths. All objects that share the same name in different paths will actually refer to the same object. match() also allows adding more constraints on objects with where(). You can do a distinct search on particular objects, and it will ensure that each combination of values is seen only once in the results:

match(
    node('a', name='a').out().node('b').out().node('c', name='c')
).distinct(['a', 'c'])

This matches a chain of a -> b -> c nodes. If two nodes “a” and “c” are connected through more than one node of type “b”, the result will still contain only one (“a”, “c”) pair.

There is another convenient pattern to use when writing queries: you separate your structure from your criteria:

match(
    node('a', name='a').out().node('b').out().node('c', name='c'),
    node('a', foo='bar'),
    node('c', bar=123),
)

The query engine will optimize that query into:

node('a', name='a', foo='bar')
    .out().node('b')
    .out().node('c', name='c', bar=123)

No cartesian product, no unnecessary steps.

Change Notification

Ok, now you have a graph query defined. What does a notification result look like? Each result will be a dictionary mapping a name that you have defined for a query object to the object found. E.g. for the following query

node('a', name='a').out().node('b').out().node('c', name='c')

results will look like {'a': <node type='a'>, 'c': <node type='c'>}. Notice that only named objects are present: there is no <node type='b'> in the results because, although that node is present in the query, it does not have a name.

You register a query to be monitored, along with a callback to execute when something changes. Later, when someone modifies the monitored graph, AOS detects that the updates caused new query results to appear, old results to disappear, or existing results to change, and executes the callback associated with the query. The callback receives the whole path from the query as a response, along with the specific action (added/updated/removed).
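A bare-bones way to picture the diffing behind those notifications is to compare the previous and current result sets and emit an action per difference. AOS computes this incrementally; re-running the query and diffing whole result sets, as in this invented sketch, is just for illustration:

```python
def diff_query_results(old, new):
    """Compare two result sets (dicts keyed by result identity) and emit
    (action, key, result) tuples for the callback."""
    actions = []
    for key in new.keys() - old.keys():
        actions.append(("added", key, new[key]))
    for key in old.keys() - new.keys():
        actions.append(("removed", key, old[key]))
    for key in new.keys() & old.keys():
        if new[key] != old[key]:
            actions.append(("updated", key, new[key]))
    return sorted(actions)

events = diff_query_results({"p1": 1, "p2": 2}, {"p2": 3, "p3": 4})
```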

Notification Processing

When the result is passed to the processing (callback) function, from there you can specify reasoning logic. This could really be anything: generating logs or errors, rendering configurations, or running semantic validations. You could also modify the graph itself using the graph APIs, and some other piece of logic may react to the changes you made. This way, you can enforce the graph as a single source of truth while it also serves as a logical communication channel between the pieces of your application logic.

The Graph API consists of three parts:

Graph management – methods to add/update/remove objects in a graph:
add_node(), set_node(), del_node(), get_node()
add_relationship(), set_relationship(), del_relationship(), get_relationship()
commit()
Query:
get_nodes()
get_relationships()
Observable interface:
add_observer(), remove_observer()

Graph management APIs are pretty self-explanatory.

add_node() creates a new node

set_node() updates the properties of an existing node

del_node() deletes a node

commit() signals that all updates to the graph are complete and can be propagated to all listeners.

Relationships have a similar API.
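A minimal in-memory sketch of this management API can help make the shape concrete. The method names follow the text; everything else, including the observer protocol details and data layout, is invented for illustration and is not AOS code:

```python
class Graph:
    def __init__(self):
        self._nodes = {}
        self._observers = []
        self._next_id = 0

    def add_node(self, node_type, **props):
        self._next_id += 1
        node = dict(id=self._next_id, type=node_type, **props)
        self._nodes[node["id"]] = node
        self._notify("added", node)
        return node

    def set_node(self, node_id, **props):
        self._nodes[node_id].update(props)
        self._notify("updated", self._nodes[node_id])

    def del_node(self, node_id):
        self._notify("removed", self._nodes.pop(node_id))

    def commit(self):
        # signal that a batch of updates is complete
        for observer in self._observers:
            observer.on_graph(self)

    def add_observer(self, observer):
        self._observers.append(observer)

    def _notify(self, action, node):
        for observer in self._observers:
            observer.on_node(action, node)
```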

The observable interface allows you to add/remove observers: objects that implement a notification callback interface. The notification callback consists of three methods: on_node(), on_relationship(), and on_graph(). on_node() and on_relationship() are called when any node/relationship is added, removed, or updated; on_graph() is called when the graph is committed.

The Query API is the heart of our graph API and is what powers all searching. Both get_nodes() and get_relationships() allow you to search for corresponding objects in a graph. Arguments to those functions are constraints on the searched objects. E.g. get_nodes() returns all nodes in a graph, get_nodes(type='system') returns all “system” nodes, and get_nodes(type='system', role='spine') constrains the returned nodes to those having particular property values. The value for each argument can be either a plain value or a special “property matcher” object. If the value is a plain value, the corresponding result object must have its property equal to that value. Property matchers allow you to express more complex criteria, e.g. “not equal”, “less than”, “one of given values”, and so on:

Property matcher example:

graph.get_nodes(
    type='system',
    role=is_in(['spine', 'leaf']),
    system_id=not_none(),
)
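Property matchers can be pictured as simple predicates. Here is one possible standalone implementation of is_in, ne, and not_none together with a get_nodes-style filter; the filter function and the node representation are invented for illustration:

```python
def is_in(values):
    return lambda v: v in values

def ne(value):
    return lambda v: v != value

def not_none():
    return lambda v: v is not None

def filter_nodes(nodes, **constraints):
    """Keep nodes whose properties satisfy every constraint; a constraint is
    either a plain value (tested for equality) or a matcher predicate."""
    def matches(node):
        for key, expected in constraints.items():
            value = node.get(key)
            if callable(expected):
                if not expected(value):
                    return False
            elif value != expected:
                return False
        return True
    return [node for node in nodes if matches(node)]

nodes = [
    {"type": "system", "role": "spine", "system_id": "s1"},
    {"type": "system", "role": "server", "system_id": None},
]
spines = filter_nodes(nodes, type="system",
                      role=is_in(["spine", "leaf"]), system_id=not_none())
```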

In your graph schema you can define custom indexes for particular node/relationship types, and the methods get_nodes() and get_relationships() pick the best index for each particular combination of constraints passed, to minimize search time.

Results of get_nodes()/get_relationships() are special iterator objects. You can iterate over them to yield all found graph objects, and you can use the APIs those iterators provide to navigate the result sets. E.g. get_nodes() returns a NodeIterator object, which has methods out() and in_(). You can use those to get an iterator over all outgoing or incoming relationships from each node in the original result set, then get the nodes on the other end of those relationships and continue from there. You can also pass property constraints to these methods, just as you can for get_nodes() and get_relationships().

graph.get_nodes('system', role='spine') \
    .out('interface').node('interface', if_type='loopback')

The code in the example above finds all nodes with type “system” and role “spine” and then finds all their loopback interfaces.

Putting It All Together

@rule(match(
    node('system', name='spine_device', role='spine')
    .out('hosted_interfaces')
    .node('interface', name='spine_if')
    .out('link')
    .node('link', name='link')
    .in_('link')
    .node('interface', name='leaf_if')
    .in_('hosted_interfaces')
    .node('system', name='leaf_device', role='leaf')
))
def process_spine_leaf_link(self, path, action):
    """
    Process link between spine and leaf
    """
    spine = path['spine_device']
    leaf = path['leaf_device']
    if action in ['added', 'updated']:
        # do something with added/updated link
        pass
    else:
        # do something about removed link
        pass

The Dirty Truth about Intent Washing

Networks
Apstra Blog

Any new technology in Silicon Valley will go through a phase where defining precisely what it is can be difficult to do. New ideas need new words to describe them, to keep these ideas distinct in people’s minds. In spite of this, many people will not have heard about any given new and emerging concept. This creates a window of time where existing vendors with existing products will invariably attempt to further blur an already blurry line in the minds of potential customers about what something new really is. We call this “washing.” For instance, “cloud-washing” is when something that really isn’t “cloud” is talked about as if it were.

Intent-Based Networking (IBN) is currently undergoing the same treatment. Many companies have announced intent-based initiatives or products. They are using the language and the surrounding ideas to talk about existing products that simply were not designed from the ground up with Intent as a guiding light.

At Apstra, we know that Intent-based systems are still emerging. Yet, we are delivering the world’s first and only vendor-agnostic, Intent-Based Networking System. From day one, our Apstra Operating System (AOS™) was built with Intent as our guiding light. Whether we are talking about our flagship data center management solution, or about AOS as a platform for others to develop their own apps, Intent has been the primary driving theme. Intent is what elevates Network Engineers and Automation Developers above the complicated minutia inherent in networking.

It’s been fascinating watching “Intent-Washing” happen right before our eyes every day in social media and at industry events. So we did something fun to shine a light on it. Please enjoy this short video we made about “Intent-Washing.”

And if you are really serious about understanding the concepts behind next-generation networks, read Sasha Ratkovic’s excellent blog, “Intent-Based Networking: What Is It?”

…while you’re at it, please check out some of our other videos and articles. Schedule a demo and see first hand what AOS can do for you.