Network services provider Lumen Technologies is building on a 2024 filled with major partnership deals by taking a wide-angle look at how the company can help Federal government agencies – whose networks not only need modernization beyond their decades-old foundations, but also optimization to benefit from the next wave of artificial intelligence deployments as Federal government AI policy evolves under the new Trump administration.

That’s the new year’s vision from Josh Finke, who joined Lumen in 2024 as Senior Vice President, Public Sector. Finke sat down with MeriTalk as the last year came to a close to talk about what the company is bringing forward to help Federal agencies as the new year kicks off.

Josh Finke, Lumen

MeriTalk: What’s Lumen working on now to better serve Federal government customers?

Finke: Since joining Lumen last August, my number one priority has been ensuring that the company is the go-to partner for enabling public sector agencies’ missions. We are doing that with a focus on driving simplification and efficiency. The simple reality is that we operate in a world of increasing technological, political, and mission complexity, and anything and everything that we can do to simplify the path to success for mission outcomes is paramount to our joint success with our government customers.

While the Lumen brand is maybe not as well known in some portions of the government space, we operate the largest ultra-low latency network in the world and serve as the backbone for the most critical networks within our Federal agency partners, and the announcements that we’ve recently made on our hyperscaler partnerships represent the largest network buildout in multiple generations.

MeriTalk: What’s a good example of helping government agencies simplify their tech?

Finke: As you look at the transition that agencies have been going through from the old Networx communications services contract to the current Enterprise Infrastructure Solutions (EIS) contract, there are a couple of pieces that naturally introduce further complexity around contract vehicle migration and technology migration, and we’re working to simplify that process across a number of fronts.

One of them is that most public sector network infrastructure has been out there for quite some time, and some agencies are leveraging networks that range from copper installed in the 1980s or ’90s all the way up through modern fiber installations. Being able to offer a simplified network design that provides redundancy, increased bandwidth, and security, in a manner that’s easy to consume and even includes capabilities such as automated ordering and provisioning, is paramount to everything that they’re doing. At a high level, that’s one way we are driving simplification.

A second component to driving simplification is helping customers understand that it doesn’t have to be separate technology stacks that they’re consuming. The network can have security built in, the network can have redundancy built in, and the network can have application intelligence built into it so that we’re no longer looking at having to do procurements for separate stacks.

Zero trust security is a great example of that. If we can build security into everything from the start, from where a user accesses information all the way through the data center – whether that’s on-premises or in the cloud – those are areas where you can greatly simplify the approach that agencies have taken for the past 30-40 years.

MeriTalk: During the past year Lumen has announced some big partnership agreements with Microsoft, AWS, Google Cloud, and other companies geared toward meeting increased demand for data center and AI services. What’s the differentiator for an improved AI network arrangement versus more standard network services deals?

Finke: There are a number of really key and critical differences that are coming in with these partnership announcements that we’ve made, and in particular, how they provide a benefit to our Federal agency customers and partners. One of the first is that we’re driving more intelligence into the network than has ever been there. For a long time, you could look at dark fiber as dumb pipes, but what we are doing now is putting a lot more intelligence into the fiber infrastructure.

We’re deploying security technologies that are focused on not only network security, but all the way down to layer one, where we are providing physical security on the network. We’ve partnered with Corning on the ability to detect ground vibration that is not even necessarily right at the fiber strand but in proximity, so that we have better intelligence around what could be affecting fiber routes, and so we can see if there is a potential disturbance that we can proactively stop. That’s where it starts, at the base layer, at the foundation.

On top of that we are able to detect very minor impacts on the fiber and put in technology now that allows us to have much greater path diversity by leveraging not only new physical buildouts but also logical networking.

What we’re doing now with the hyperscalers and with our largest Federal customers is working collaboratively to understand where they have network demand, and instead of building out one-offs to meet those demands, we’re building an AI fabric that has greater redundancy, greater reliability, and lower latency.

We’re doing this because we understand that with AI and the workloads and data that are being consumed, not only do you have to be as close to the data as possible, but you need to ensure that access is always on. So, we’re looking at the total environment and saying, here’s what each of the hyperscalers needs, here’s what the largest Federal agencies need, here’s what the largest enterprise customers need, and building in points of access and presence that create a resilient, secure fabric that ensures redundancy and scalability for decades to come.

The final piece of what differentiates a modern network is that we’re spending a lot of time on increased security. We’re leveraging technologies like quantum encryption to ensure that, end-to-end and at all points, not only can we encrypt what’s traveling over our network, but we can do it in a way that provides a secure foundation that goes so much further than just zero trust.

MeriTalk: That sounds like a lot of work optimizing for AI. Can you share with us Lumen’s outlook on Federal government AI policy, both as it exists in the waning days of the Biden administration and what you might foresee as it changes during the incoming Trump administration?

Finke: When I look at the current state of where we are versus where we’re going, we see that the AI executive order that came out in 2023 worked to address the fact that agencies needed to appoint somebody to be a chief AI lead, and that certainly makes sense. The order also signaled that we can’t let AI get past where we are in technology development.

Where we really want to help is in AI being considered as part of a multiphase approach. The first phase is asking what AI can do really well for an agency now. AI shouldn’t be considered an end state; it should be part of a technology modernization journey. One of the things that I have seen cause not only a delay but also a lack of real adoption of AI within agencies of all sizes is that AI is viewed as an end state that will drive an efficiency metric.

But in my mind, the thinking needs to be: what can AI do for us right now to drive efficiency and to start driving automation in some of the workflow processes we have? But more importantly, AI adoption needs to provide predictable, measurable, and reliable outcomes for agencies. That state is still further down the line.

What we’re seeing is that many in the hyperscaler and enterprise space have started that work; they’ve taken AI through the training process, and now they’re starting to reach the inference stage. In most of the Federal space, there are varying levels of progress. We need to partner with the agencies to focus on the mission outcomes they want, take the proper steps through adoption, and not be distracted by an ideal end state of what AI could be.

We need to take a look at what AI is now, how connectivity is driving the need for additional bandwidth, and how we’re looking at security around that, and then turn that into a journey that leads to verifiable progress in driving efficiency.

MeriTalk: Any other advice for the government on furthering its AI capabilities?

Finke: What the generative AI ethos and the potential for its benefits promise is an environment where people provide fewer of the inputs and pull fewer of the levers, and where AI starts to understand what needs to be accomplished and is allowed to go target a workload as a fully automated, more reliable function.

In light of this, I believe the opportunity, and the challenge at the same time, within the Federal space is rationalizing all of government’s data to understand what we can learn from the data already available. The vast amount of data that’s been accumulated over decades within the Federal space could be leveraged to provide informed insights moving forward, and that’s an area that I think AI policy needs to address. We need to architect and enable a unified approach for where data is stored and how it’s accessed in the fastest, most reliable, and most secure way, and that is exactly where the scale, security, and capabilities of Lumen’s network come into play.

John Curran is MeriTalk's Managing Editor covering the intersection of government and technology.