Tech ONTAP Podcast Episode 398 – NetApp Hardware Announcements: Fall 2024


NetApp recently announced a slew of new hardware platforms, including the impressive A90 system, as well as some NetApp BlueXP additions, such as the new Workload Factory.

The NetApp A-Team was kind enough to join us to discuss these exciting new announcements.

Featuring:

For more information:

Finding the podcast

You can find this week’s episode on Soundcloud here:

There’s also an RSS feed on YouTube using the new podcast feature, found here:

You can also find the Tech ONTAP Podcast on:

Transcription

The following transcript was generated using Descript’s speech to text service and then further edited. As it is AI generated, YMMV.

Tech ONTAP Podcast Episode 398 – NetApp Hardware Announcements Fall 2024
===

Justin Parisi: This week on the Tech ONTAP Podcast, we invite the NetApp A Team to talk about the latest NetApp hardware announcements.

Intro/Outro: [Intro]

Justin Parisi: Hello and welcome to the Tech ONTAP Podcast. My name is Justin Parisi. I’m here in the basement of my house, and with me today I have some special guests from the NetApp A-Team, and we’re going to talk about some new NetApp announcements that just got released. To do that, we’ll start off with Pascal de Wild. He’s our resident NetApp person on the call here. So, Pascal, what do you do here at NetApp, and how do we reach you?

Pascal de Wild: I am an ATS at NetApp. You can reach me on LinkedIn; if you look for me, you’ll find me. I’m based in the Netherlands and I serve our central government customers.

Justin Parisi: Alright, excellent. Also with us here today, we have Ryan Beaty. Ryan, you’ve been a member of the NetApp A-Team since, I guess, almost the beginning. Or it was the beginning, wasn’t it?

Ryan Beaty: Yep, it was the beginning.

Justin Parisi: Yeah, you’re like an OG.

Ryan Beaty: Just a couple of us. Yeah.

Justin Parisi: So what do you do and how do we reach you?

Ryan Beaty: I’m a senior systems architect over at Red8.

My handle is ONTAPRyan on X, I guess, now, instead of Twitter. But yeah, I’ve been at Red8 for about seven years now.

Justin Parisi: If you’re not familiar with the NetApp A-Team, that is something that has been going on for a while. So Ryan, tell us about the A-Team and how it’s evolved since you started.

Ryan Beaty: Well, it’s gotten a lot bigger, that’s for sure. Basically, Samantha Moulton reached out to me, I don’t know, about 10 or 11 years ago now, I guess. It’s an advocacy group for NetApp. The reason she reaches out to the people that are on this team is because they’ve done something to show that they’re willing to do blogs, podcasts like we’re doing right now, right?

Part of being on the team is getting the word out for NetApp. That’s the whole point of it.

Justin Parisi: Yeah, it also gives credibility, because if NetApp is saying it, it’s one thing. If NetApp partners and people who use this stuff are saying it, that’s another. That’s more of a testimonial type of approach, as opposed to tooting your own horn. And you guys are everywhere: Insight, that sort of stuff. Also with us, Richard van Dantzig.

So Richard, what do you do and how do we reach you?

Richard van Dantzig: Hi, Justin. I work as a consultant at ITQ in the Netherlands. I’ve been in the consultancy business with NetApp for about 20 years. I can be reached on LinkedIn, on X at rvandantzig, or via my blog, dive-virtual.com.

Justin Parisi: All right. Richard, have you ever heard of the band, Danzig?

Richard van Dantzig: Yeah, I know.

Justin Parisi: Yeah? Yeah? Were you named after them?

Richard van Dantzig: No, I wasn’t.

Justin Parisi: Are you their child?

Richard van Dantzig: They are missing a T in their name.

Justin Parisi: They are missing a T. Alright, also here today, Oliver Fuckner. Oliver, did I say that right? I hope I said that right.

Oliver Fuckner: Yeah, hi, my name is Oliver. I work for Atruvia. We have German banks as customers, and we are doing MetroClusters and StorageGRID. And of course, you can find me on LinkedIn, and also on Twitter, but I don’t use it regularly anymore.

Justin Parisi: I see Oliver all over LinkedIn, showing off his shiny new hardware.

He’s very proud of it.

Oliver Fuckner: Yes, yes, sure.

Justin Parisi: Are you guys jealous of his hardware?

Richard van Dantzig: Very jealous.

Justin Parisi: That’s what he’s going for. It’s like keeping up with the Olivers or something. I don’t know. So also last but not least, Scotty Gelb. Scotty is in the airport.

Scotty, what are you doing? How do we reach you?

Scott Gelb: Good. Thanks, Justin. I’m Scott Gelb with Enterprise Vision Technologies in Santa Monica, California. Covering enterprise accounts. I’ve been here nine years, been working with NetApp for wow, 24 years. My LinkedIn is sgelb, X is ScottyGelb, and my blog is Storage Exorcist.

Justin Parisi: All right, so you know where to reach these guys. We have a full house here today, and the reason why is because NetApp just announced a boatload of new things. The first thing we want to start off with is capacity. So tell us a little bit about the capacity announcements, Pascal.

Pascal de Wild: So, what we did is we refreshed our capacity flash systems, where we used to have the same systems in our A series and our C series. We now announce two 2U systems and a 4U system to provide our customers with more capacity. In the new systems we can put in 60 terabyte drives, so we can do over a petabyte of usable storage in 2U, while using just 800 watts of power. So it’s sustainable, it’s a very low footprint from a rack space perspective, and it’s very fast. We’ve got the bigger system, the 4U system, that has 48 drives in it. Well, I mean, Oliver showed us the hardware. You can put a lot of space in there. So Oliver, when are you going to use it?

Oliver Fuckner: Yeah, we’ve recently refreshed our DR systems, and we went from FAS systems, almost one rack of stuff, to the newer C Series 4U systems. So it’s a lot, lot less rack space we use, a lot less power we use, and it’s much faster than before.

They’re great systems.

Justin Parisi: So you said much faster than before. So what were you using before? What are you upgrading from?

Oliver Fuckner: We had a FAS9000, and we had five shelves of eight terabyte SATA drives and a Flash Pool for speeding things up.

Justin Parisi: So it sounds like these new capacity systems are targeted at the FAS9000s, the giant systems with the SATA drives, where we’re trying to go more all-flash here. Now, when you say it’s much faster, is that because of the SATA limitation, or is that because of something else?

Oliver Fuckner: Well, mostly because the SATA drives are just slow. We hit, I don’t know, 80 percent I/O busy, and it was a pain, and deleting volumes was really, really slow. It’s gotten much faster, the SnapMirror updates are very, very fast, and overall the performance is much better.

Ryan Beaty: What system did you end up going with, Oliver?

Oliver Fuckner: We now use the C800. The newer systems were not available when we upgraded.

Pascal de Wild: Yeah, we’re talking about DR now, but the capacity flash systems are, as Oliver says, that fast. You can use them in production as well. So any generic VMware or file workload will run very fast on these capacity systems.

Ryan Beaty: Yeah, I’ll take that a step further.

I know it’s not the topic, but when you start sizing with Keystone, it’s kind of funny how they end up placing you on C Series a lot of the time, because it’s all you need. And that’s NetApp guaranteeing the performance. That’s kind of crazy.

Oliver Fuckner: Yeah, that’s absolutely true.

Justin Parisi: You mean we’re not trying to upsell you on an AFF A800 or A900 every time?

Pascal de Wild: No, no, no, especially in Keystone. We tend to also look at the customer’s data center and their footprint and their power usage.

Justin Parisi: So, with these flash systems, are they using your basic normal flash SSD or are they using the capacity type of flash where it’s the slower connection, but bigger capacity pieces?

Pascal de Wild: Now, we’re still using the QLC drives, the quad level cell drives. So you get a bit higher latency compared to our A series, but like I said, it’s still within the two to three millisecond range, so you can run a whole ton of workloads on it.

Scott Gelb: And the C800 has so much memory.

Anything that’s cached gets a much faster response time also.

Pascal de Wild: Yeah, and it’s even better in the new systems, Scott. It’s insane.

Justin Parisi: I find that with performance, people tend to fixate on those end numbers, right? Like, how many IOPS can I get? How much throughput can I get? And a lot of the time, you’re not really hitting that, ever.

So, go with what you need. Yeah. Save yourself some money, save yourself some heartache. And then if you do need to expand out, we do allow you to add new nodes to the cluster. So then you have additional capacity that you can add as you need it, as opposed to buying a giant system right up front thinking that you’re going to fill it up.

Pascal de Wild: Yep, totally.

Justin Parisi: So as far as the capacity systems go, what sort of use cases do you see for your customers with this, Scott? How are you positioning this to them?

Scott Gelb: We have quite a few customers that are used to 10K SAS drives, so it’s a pretty easy job, because QLC is a direct replacement and much faster. But like we discussed earlier, we’re also finding customers that don’t need that SAP HANA type one millisecond response.

So we’re able to put C Series even in places where we had A Series before. So it’s a nice middle ground. We’re still also doing some FAS with 22TB drives when capacity at the lowest cost makes sense, but I would say C Series has been such a great product for us. It’s our fastest seller and highest percentage of sales.

The majority of the clusters that we’re integrating now are C Series.

Justin Parisi: Yeah, it definitely fills a market segment that I think we’ve been missing for a while, which is that kind of mid-range, SMB type of business where you don’t need the heavy hitters, but you also don’t necessarily want the SATA drives, because those aren’t necessarily something you want to use in production.

What the C Series also allows you to do is leverage it as a SnapMirror destination. So now you can use a low cost system and still guarantee that you’re going to have pretty good performance if you have to fail over, even if it’s from an A Series.

Scott Gelb: Absolutely. Yeah. It’s been the most successful NetApp product launch from our perspective, I think, for NetApp also.

Justin Parisi: Dr. Ryan, do you concur?

Ryan Beaty: Oh yeah, absolutely. Yeah. We’re selling C Series all the time. If it’s a really small footprint, it’s really hard to get them into a C Series. Some of my clients have many systems, but many of them are small, if that makes any sense; they’re kind of isolated to the business units.

So, in those cases, a small A Series will actually fit better from a price perspective, because they’re not needing to pay for all that storage. But overall, C Series is pretty much what’s getting pushed right now. Not from our standpoint; just from a sizing standpoint, we don’t need an A Series.

Justin Parisi: Yeah, makes sense. So, Richard, you wrote a blog recently on the mid-range systems that NetApp has released in its recent announcement. Can you go over that blog post and tell us what you’ve seen with the mid-range systems and what sort of ideas you have about it?

Richard van Dantzig: Yeah, last Monday the announcements came out with the new AFF A series.

So, the new A20, A30, A50. Yeah, it’s delivering. For us, especially in the Netherlands and in Europe, the entry point of the flash systems is now suitable even for the bigger customers in the Netherlands: enough capacity and performance to deliver their workloads, and still room to grow to bigger systems in the future.

And especially combining it with the C series in the same cluster gives them really nice possibilities for all workloads.

Justin Parisi: So it sounds like the mid range systems may have a little bit of overlap with the capacity systems, right? So when you’re talking to a customer, how do you differentiate when you need to give them a capacity system versus one of the mid-range systems?

Because you don’t necessarily want to have this confusion with the overlap.

Richard van Dantzig: Yeah. The most important part of it is the latency. And a lot of customers don’t know what kind of latency they need to expect for their applications. I have tons of conversations with customers.

They say, yeah, my application is latency sensitive. But is that really sub-millisecond, or is one or two milliseconds enough? Because when they come from an older system which is now three to five years old, mostly an entry level AFF will be suitable, but possibly also a bigger, higher end series.

What we mostly do is try to make a mix, so have both capacity and performance nodes in one cluster, so you can really differentiate on workload and not just have one system for everything.

Justin Parisi: So what do the mid-range systems include?

Richard van Dantzig: The entry level A20 that was just announced. They’re talking about a 23 percent increase over the A150, and still offering 15 terabytes of raw data. So you can easily go up to three petabytes of effective storage, and that’s in only 2U of rack space.

Justin Parisi: So the mid-range systems sound like they’re going to be designated with the A letter and numbers under 100, like the 20s, 30s, 80s, 90s. Does that sound about right?

Richard van Dantzig: Yeah, and it’s complementing the A70, A90, and the A1K, which were introduced at Insight.

Justin Parisi: So Pascal, my understanding about the A90 is that they’ve added a functionality that is only specific to those types of platforms for performance. Have you seen any of that?

Pascal de Wild: Yeah. These are the Vino systems. One of the things we did change is to start using the Intel QAT instruction set to offload compression, deduplication, and things like that.

So we’re increasing the throughput per CPU core, and thus the throughput per single volume. In the newer ONTAP releases, we also changed a lot in the D-blade to run things through different kinds of queues, further increasing the volume throughput, which is way higher than it was before. The A90 is a high end system; the performance is a beast. It’s not comparable to the old systems.

Justin Parisi: Yeah, the architecture itself was adjusted to allow these higher throughput numbers for single volumes. Because for a while, all you could do was use a FlexGroup volume, but a FlexGroup volume doesn’t cover all your use cases, right?

Sometimes you need a FlexVol in certain situations. And if you have a FlexVol need, then the A90 is probably a good place to look if you really want that performance and latency consideration.

So anyone else have any thoughts on the mid range systems that have been released? The A series, 90s, 80s, 70s, whatever, right?

Ryan Beaty: The only thing that I would leave it with is, as we were saying, the performance has drastically increased over the outgoing A250s and A400s. In my opinion, you could be running an A400 today and go down to the A30, because the A30 is about the same performance as an A400.

Justin Parisi: Yeah, you see that with your laptops too, right? You get a new laptop in two years and that thing already blows the other one away; the lower model can blow away the high models. And that’s just the progression of hardware, the progression of technology, and it’s just something you get used to.

Ryan Beaty: Yeah, now it’s come a long way from the Intel Celerons I used to use.

Justin Parisi: God, you’re old.

Ryan Beaty: I know.

Scott Gelb: I remember the Alpha MIPS in the 700 series.

Justin Parisi: Alpha MIPS.

Oliver Fuckner: S270. Yeah.

Justin Parisi: Oh my God. You guys are so old.

Scott Gelb: 270 was Broadcom.

Justin Parisi: All right, nerds. Let’s talk about BlueXP now and Cloud Volumes ONTAP. So Ryan you have some experience with this field in this area.

So tell us about the new advancements with BlueXP and Cloud Volumes ONTAP.

Ryan Beaty: I guess I’ll start with Workload Factory. That’s probably the coolest one. If you haven’t played with Workload Factory, go to labondemand.netapp.com and play around with it; there’s a lab there. I know at Insight you had to ask for it, so you had to get with your NetApp rep, but there is a lab there for Workload Factory. You may need to get permission to get it, because I think it’s a real lab, not a simulated lab.

But it basically will set up an entire environment for you to use AI. So, in the Insight lab they had you work with some LLMs. You could choose your LLM, throw some files in there, and then in the chat you could ask, hey, what are the sessions that are going on at NetApp Insight?

And then it would list them out for you. You could talk to it. It took 30 to 40 minutes to completely set up a whole AI environment inside of AWS. It’s pretty cool. So go check that out. Workload Factory is in there now. But basically what it does is help set up the environment for you.

So you don’t have to learn how to do it all from scratch. It’s pretty slick.

Justin Parisi: Where’s the fun in that? I mean, we need to learn everything from scratch,

Ryan Beaty: right?

Justin Parisi: Why are you doing it for me, NetApp?

Ryan Beaty: If you’re familiar with Instaclustr, it’s kind of like that, where it does the whole wrapper for you. But it’s pretty cool.

So yeah, there’s Workload Factory, and there’s the Ransomware Protect stuff. They’ve added in SIEM technology with Sentinel. So all your logging and stuff from Ransomware Protect can go into Microsoft Sentinel and help out with your ransomware recovery projects; any issues that you have are all logged in the same spot.

So that’s pretty cool. And then Workload Factory also added some things called guardrails. I haven’t played with that at all yet, but basically it’s for the generative AI projects that you have. Picture PCI data, or sensitive data like HIPAA data, getting flagged and masked upon processing.

That way you can feel a little bit better that you’re not putting private data out in your gen AI projects. So it monitors all those files, reads them in flight, and masks as it needs to, which is pretty cool. There are some other things I haven’t used; I don’t know if anyone else has used them, maybe they can speak on it. The software build studio, I haven’t dug into that one, that wasn’t announced, and then the database continuous optimization, I haven’t played with that either.

Pascal de Wild: We secretly, sneakily introduced the multi-tenancy part into BlueXP, right? Which is a major revamp, yes.

Ryan Beaty: All the names have changed. Oh, yeah. They’re calling things differently, so yeah, there’s a name change as well.

Justin Parisi: Well, if you change the name, it’s like a brand new feature, right?

Scott Gelb: Cloud Insights is now Data Infrastructure Insights, right?

Justin Parisi: I don’t think it’s an accident that most of the name changes that we’ve had are centered around intelligent data and AI, right? Because that’s the hot thing now. That’s what everybody’s interested in now. And let’s actually touch on that because Ryan mentioned the generative AI stuff. How many of you are seeing your customers either explore or implement generative AI into their workflows?

Ryan Beaty: I think the problem is they don’t know where to start. And then when they just start on their own and they don’t have help, it gets frustrating, and then they stop. So I think people want to do it. It is pretty new, so I think people are just trying to figure out how to do it, which is where Workload Factory comes in.

Like, hey, we’re going to set it up for you. You can play around with it. This is what it should look like. At least it gives them an idea.

Scott Gelb: Yeah, we have an AI practice, and a lot of it is large language and small language model based. So we have a small team that’s been going to customers and helping them build whatever model they want, whether it’s Llama, on premises instead of having it in public.

Richard van Dantzig: Yeah, we just announced a new service so we can do a POC for our customers based on GenAI.

Justin Parisi: Okay, so we’re seeing it a lot. Oliver, what about you? Are you seeing that where you are?

Oliver Fuckner: Well, we are doing stuff for banking customers. We are still very much in the beginning phase of doing AI stuff. It’s slowly starting, yes, but it will take one or two years.

Justin Parisi: Well, especially because you’re in the financials, right?

That’s going to be very tricky with all the regulations and the PII stuff out there. So I would imagine that the use cases for generative AI in the financials probably centers more around getting a handle on what you have out there for data.

Scott Gelb: I have a question for Oliver. What’s more popular in Germany? Is it MetroCluster or David Hasselhoff?

Oliver Fuckner: Well, I have to think about that one.

Justin Parisi: Is Hasselhoff still, is he still popular? Has that ship sailed? Yeah. I thought that would be done, you know, after the weird drunk hamburger thing, I thought that would be it.

Scott Gelb: Yes. Yes. But he came out of that all clean now. He’s an upstanding man.

Justin Parisi: He’ll always be my Knight Rider.

Oliver Fuckner: But we are still in a phase where our lawyers tell us how to do this AI stuff. And after they are finished, then we will try to explore and experiment with it and see what we can do with it and what our use cases are. But it’s still in the lawyer phase for us.

Justin Parisi: It’s also still in the regulatory phase. I mean, the new EU AI stuff just came out. I’m sure that’ll be ever changing as they discover new loopholes and use cases they need to fill. So it’s very nascent, and it’s going to be a tricky landscape for people. And having something like Workload Factory is going to be useful, because it allows you to play around with it without having to risk anything on your end.

Richard van Dantzig: Yeah, and AI is not an infrastructure party. It’s mostly really about the data at the customer side and the people who will use it. So they have to decide what they want to do with it, and then see what the infrastructure can do for it.

Ryan Beaty: Still waiting for Data Explorer to come out.

Justin Parisi: What is Data Explorer? Can you tell me what that is?

Ryan Beaty: It’s part of the BlueXP portfolio. It basically tags all of the data in your environment. So anywhere BlueXP can see, it can basically tag those things.

So let’s just say you wanted to do a search on every PDF inside your company, right? Whether that data is in Australia, New York, or London, it’ll actually show you all of the PDFs in your entire company, and you can start utilizing that for AI. So if you want to do inferencing on certain things, you know where your data is located.

So that helps get your mindset around: here’s where the data is that we’re trying to inference on. So, what do we need to do? Where do we need to put it? Because that’s the biggest thing with AI, right: data locality. If you know where the data is, then you can figure out how to move it.

I think knowing where it is first is really going to help out a lot of people.

Justin Parisi: Yeah, that’s a huge problem with unstructured data: just simply knowing where everything is. Knowing if you have 18 million different duplicate items is another thing, right? I know dedupe takes care of a lot of that, but you still don’t want a bunch of that floating around, because you get data spillage.

You get people looking at things they shouldn’t look at. There’s so much out there, and there aren’t enough employees to be able to sort through all of it. And the biggest implication generative AI has for most customers, most end users, I think, is finding that data, sorting it, and tagging it.

Scott Gelb: Yeah, that’s a great point. And we don’t talk enough about the BlueXP product set and the features that are free. If you want to do classification or the basic observability type things, those are all built in, and we should have every customer using those.

Justin Parisi: Yeah, especially since it’s free. It doesn’t cost you anything in performance either, because it’s not super invasive when it’s doing this stuff. There’s that initial hit where it indexes everything, but after that, it’s just changes. So Scotty G’s at the airport still. I’m guessing you’re probably getting on a plane pretty soon here, unless you just live at the airport.

Scott Gelb: We’re boarding in about 15 minutes. I’ll get running here in a sec.

Justin Parisi: Alright, cool. So before Scotty leaves, we want him to talk about the ASAs, the all-SAN arrays.

Tell us about that. A while back, I remember them getting announced, and I was like, man, why would anyone want an array with just LUNs? Why wouldn’t you want all the good stuff? So Scotty, why do people want that?

Scott Gelb: Well, NetApp’s done a great job, and we have to give kudos to a company that’s multi billion a year and keeps growing, and yet they still have engineering that listens to us.

From the partner and customer advisory boards, I remember probably two years ago, we sat down with Sandeep and said, we love all the nerd knobs. Of course, all of us love ONTAP and all the features. However, our competitors have made things easier in some ways. Why doesn’t the box just give us one?

Let me go in and just say, I want a LUN of this size, and be done. And sure enough, at Insight, they demoed the latest ASA, and there’s a screen where you just create LUNs. NetApp listens to us, so I have to be grateful for the continued capabilities where they make it easier for us to sell and integrate NetApp.

The new ASA is, as we know, the same ONTAP, just a lot easier. Everything, of course, is active-active; that helps us compete. But even more so, we can go to more than two nodes, unlike a lot of our competitors. And another great thing, in addition to SnapMirror active sync now being synchronous across all sites,

is that sometimes we forget about SnapCenter. A lot of our customers are looking at competitive products and need a backup product, where with ONTAP One licensing, hey, your SQL, your Oracle, anything you’re running database-wise, that’s all included as far as the backup software too.

So we have a really good tool bag with NetApp for ASA as well, and thanks to NetApp listening, it’s now much easier to implement.

Justin Parisi: So, I’ll open this up to the rest of the guys. How many of you are implementing, selling, using all SAN arrays? How prevalent is it out there?

Ryan Beaty: I’ve sold a few.

It’s like what Scott was saying, which is, well, I don’t need all this other stuff, I just want some LUNs. For the most part, it’s going to be your typical FAS systems, like the A series, not the ASA.

But there are use cases out there for them. So I’ve sold some. I don’t know about the other guys.

Pascal de Wild: Yeah, I was going to say, I sell quite a few of them for our customers that have separate departments running the block storage and running the unified storage. And by giving them block-only devices, they can keep their own department, keep it in their own part, and not be integrated with the unified stuff.

Ryan Beaty: That’s a really good point, because I’ve seen that too. It goes back to: oh, well, we don’t handle the networking. So if I get a FAS, I’ve got to talk to the network guys, whereas if you just get an ASA: no, no, no, no, that is my network too, because it’s the fabric. So it does help selling it that way.

Justin Parisi: So Oliver, with the all-SAN arrays, are you implementing those at all? And if so, are they being implemented with a MetroCluster? Are they able to do that?

Oliver Fuckner: We are the IP storage guys, we use NFS and CIFS, and the SAN guys, the Fiber Channel guys, are a different department, so we have no interference with them. So, maybe we should tell them more about it.

Justin Parisi: Do you have a rivalry with the SAN guys? Like, do you have occasional battles? No? That’s too bad. I would love to see that. SAN versus NAS on pay-per-view.

Scott Gelb: SAN is just NAS backwards. We’re all friends.

Justin Parisi: SAN is pretty backwards, guys.

I’m just kidding.

Ryan Beaty: It’s going to be the fight preview before the Tyson fight.

Justin Parisi: It is. Tyson versus Jake Paul. But before that, the undercard: SAN versus NAS.

All right. That’s the all-SAN array information. Finally, let’s cover what NetApp talked about with StorageGRID. So, Oliver, you’re our StorageGRID guy. Oliver’s everything.

Oliver Fuckner: Yes, we do a lot of StorageGRID. This is also where we get a lot of data from the SAN guys, because the SAN storage is very expensive and we natively offload everything into S3 nowadays, and we are very, very good at that.

We’re very grateful that we can afford the SGF series, with not 60 terabyte but only 30 terabyte drives. But yeah, it’s very, very cool and saves us lots of rack space, and we don’t have those heavy racks. It saves us lots of issues we had with the older SG6060 arrays.

Justin Parisi: So, what are you doing with your StorageGRID? Are you using it as a FabricPool target, or are you using it for actual object hosting?

Oliver Fuckner: We started with FabricPool, but nowadays everything else, all the native use cases, log files and stuff, is in much, much higher demand, and we have much more capacity there.

It’s about two petabytes versus five to six, I think. So FabricPool is where it all started, but nowadays everything else is much bigger.

Justin Parisi: So they basically said, oh, we have an S3 server out there now. Let’s use that.

Oliver Fuckner: We had a lot of internal customers who said, oh, we don’t have to use MinIO anymore. Come on, let’s try it.

Scott Gelb: Yeah, it’s growing. Hey, and the QLC announcement for StorageGRID has really helped us quite a bit with some of our media and entertainment customers. Also, a lot of the backup appliances, we’re doing native cloud-out from those to StorageGRID. So, in addition to FabricPool, we’re seeing a lot more use cases with our customers, including customers that don’t have ONTAP who are buying StorageGRID too.

Ryan Beaty: Yeah, I think it becomes one of those things where it started out as FabricPool and then it’s like, what else can we use this thing for? And backup was probably one of the first things it got used for, I think: getting rid of all the people’s tapes, cheaper backup storage.

Justin Parisi: Yeah, it was interesting with StorageGRID, because it was always a good S3 server, but you didn’t have the name recognition with it.

And then once we started really pushing it with FabricPool and, you know, telling people, hey, you can use this without having to pay extra money, they were like, oh yeah, let’s do that. Now they’re realizing how good the storage systems are, and it’s becoming a more widely known object storage solution out there.

I believe they just landed on the Gartner Magic Quadrant with it recently.

Oliver Fuckner: That’s exactly the journey that we’ve been experiencing. Now we’re hooked. Yes.

Justin Parisi: That’s good. First taste is free.

Ryan Beaty: The update that I thought was interesting was the 60 terabyte drives that are now available for it. We pack a lot of space in a small area. And then the tenant buckets have increased a lot, to 5,000 per tenant. We had some issues with some buckets; those should be gone now.

Pascal de Wild: And the metadata-only nodes now being really available. That’s also a big thing in the service provider world.

Justin Parisi: How many of you are seeing StorageGRID deployed as one of those global namespace type of implementations? So, unlike a NAS, you can tie a bunch of StorageGRID nodes together across the world.

And it becomes replicated across different nodes and you basically have a local copy of all this data across different sites. How many of you are seeing that as the use case for StorageGRID as opposed to some of the other ones we’ve mentioned?

Ryan Beaty: I think that’s all we sell.

Pascal de Wild: Yeah, we sell it within the Netherlands.

So we’ve got three different corners of the Netherlands using it and they’ll still have local access.
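The geo-dispersed idea Justin and Pascal describe, every site holding a copy so reads stay local, can be sketched as a toy model. This is purely illustrative: the site names, latencies, and selection logic below are hypothetical, and real StorageGRID placement is driven by ILM rules, not this kind of hand-rolled lookup.

```python
# Toy model of a geo-dispersed namespace: each object is replicated to
# several sites, and a client reads from the closest replica.
# Sites, latencies, and objects are made up for illustration.

# Hypothetical round-trip latencies (ms) from a client in Amsterdam.
site_latency_ms = {"amsterdam": 2, "london": 12, "new_york": 80}

# Which sites hold a replica of each object.
replicas = {
    "report.pdf": ["amsterdam", "new_york"],
    "video.mp4": ["london", "new_york"],
}

def nearest_copy(obj: str) -> str:
    """Pick the replica site with the lowest latency to the client."""
    return min(replicas[obj], key=site_latency_ms.get)

print(nearest_copy("report.pdf"))  # amsterdam
print(nearest_copy("video.mp4"))   # london
```

The point of the sketch is just that with a copy at every site, "local access" falls out of replica selection rather than from moving data around.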

Justin Parisi: But the Netherlands isn’t that big. It’s very flat, so all the signals can travel.

Pascal de Wild: It is very flat.

Justin Parisi: It can travel easily across the wires.

Scott Gelb: I have a customer where they have a UK presence and then East Coast and West Coast US for a three-site geo-dispersed deployment.

Justin Parisi: Is this a media entertainment?

Scott Gelb: Correct, yeah. But the beauty of it is that the ILM policies are so rich that we can keep a lot of local copies for some workloads, like the backup appliances they have. Those don’t get erasure coded, but other workloads do erasure code across sites. So any mix and match that they want, they can just change the ILM policies on the fly.

I have yet to see anything as flexible or rich in the ILM policy set as StorageGRID.
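A rough bit of arithmetic shows why mixing replication and erasure coding in ILM policies matters for capacity. The numbers below are illustrative, not StorageGRID specifics: a k+m erasure-coding scheme consumes (k+m)/k raw bytes per logical byte, versus N raw bytes for N full copies.

```python
# Storage-overhead comparison for two object-placement choices.
# Illustrative math only; real ILM rules and EC schemes vary by deployment.

def replication_overhead(copies: int) -> float:
    """Raw capacity consumed per logical byte with N full copies."""
    return float(copies)

def ec_overhead(data_fragments: int, parity_fragments: int) -> float:
    """Raw capacity per logical byte for a k+m erasure-coding scheme."""
    return (data_fragments + parity_fragments) / data_fragments

# Three full copies (one per site) vs a hypothetical 4+2 EC scheme
# spread across the same sites.
rep = replication_overhead(3)  # 3.0x raw usage
ec = ec_overhead(4, 2)         # 1.5x raw usage
print(f"3-copy replication: {rep:.1f}x raw, 4+2 EC: {ec:.1f}x raw")
```

That factor-of-two gap is why a policy engine that can apply replication to some workloads (fast local restores) and erasure coding to others (capacity efficiency) is attractive.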

Justin Parisi: And I’m guessing this is media assets like images and artifacts.

Scott Gelb: Correct. Yeah. And that’s really, I think, the sweet spot for object: when you start to deal with images. I think you’ll start to see things like medical imaging and AI learning models lean more and more into that.

Ryan Beaty: Oh yeah. One thing that I was surprised to hear from some of the speakers at Insight was that they’re using S3 and SMB for their machine learning and AI. It’s crazy because everyone’s like, you’ve got to have 400 gig, but no, you can just use SMB.

Justin Parisi: You said SMB as well?

Ryan Beaty: Yeah. One of the speakers there, I can’t remember the guy’s name, but he works at a medical pharmaceutical research company. Obviously he couldn’t give out the secret sauce, but when he was asked what he was running, he was like, yeah, SMB.

And everyone just kind of gasped and they’re like, what?

Justin Parisi: It’s come a long way. I mean, people laugh, but Windows has really made strides, because they haven’t just improved the SMB stack, they’re also adopting and embracing things like containers and open source. I saw the other day they’ve actually got documented standards now for Windows stuff. They used to not do that. They used to not tell you what the standards for Windows were. Now they have it out there. So they’ve really opened things up, and they’re starting to show what they do, and I think that’s gaining more and more traction as they start to deal with these types of workloads.

So I know that Insight, we announced another exciting new thing, and this is our disaggregated storage. And this is something I know has been mentioned and rumored for probably as long as I’ve been at NetApp. It’s been Hey, when are we getting this? When are we getting this?

When are we getting this? So I guess it’s almost here. So tell me about this announcement.

Scott Gelb: Yeah, basically what NetApp announced was separating the controller compute from the storage back end, so you can scale them separately. All disks, so the SSDs in the back end, are seen by all compute on the front end. It’s a big lift and a big change, but it’s using the same ONTAP and WAFL that we know and love.

Justin Parisi: So this is our shared everything architecture.

Scott Gelb: Exactly. Yeah.

Justin Parisi: So, do they use the term HCI anymore? No. Does that exist? No. Okay. So, it’s not HCI. So, continue. Continue.

Pascal de Wild: Well, what this does is open the door for what Ryan mentioned earlier, to have AI right next to your data. So, with this disaggregated storage, we’ll be able to just plug in data nodes and scan within our own network, not transporting the data anywhere.

And then connecting all the sites together, you’ll have your metadata and your vector databases right next to the data and be able to query them everywhere.

Ryan Beaty: Yeah, it’s almost like a Symmetrix, kind of an evil word, but it doesn’t exist anymore. On a smaller scale of what Pascal was saying, I look at it like this:

You’ve got two separate controllers, but the shelves are still bound by those controllers, right? So even though they’re split, maybe in different racks, at the end of the day, they’re still owned by one controller. And now with disaggregated, they’re not owned by anything. So as long as a controller can see that shelf, you’re good to go. It’s not owned by anybody; like you said, it’s shared everything.
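Ryan’s ownership point can be captured in a small conceptual toy. To be clear, this is not how ONTAP or the announced disaggregated architecture is implemented; the controller and shelf names are made up, and the two functions just contrast the two placement models he describes.

```python
# Conceptual toy contrasting HA-pair disk ownership with a
# shared-everything layout. Names and structures are hypothetical.

# Classic HA pair: each shelf is owned by exactly one controller.
ha_ownership = {
    "shelf1": "controller_a",
    "shelf2": "controller_b",
}

def can_serve_ha(controller: str, shelf: str) -> bool:
    """A controller only serves the shelves it owns (barring takeover)."""
    return ha_ownership.get(shelf) == controller

# Shared everything: any controller that can see a shelf may serve it.
visible_shelves = {"shelf1", "shelf2", "shelf3"}

def can_serve_shared(controller: str, shelf: str) -> bool:
    """Ownership disappears; visibility is the only requirement."""
    return shelf in visible_shelves

print(can_serve_ha("controller_a", "shelf2"))      # False: owned by b
print(can_serve_shared("controller_a", "shelf2"))  # True: visible, so fine
```

The design shift is exactly that the ownership table goes away, so adding compute or adding capacity no longer has to happen in controller-bound pairs.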

Scott Gelb: I think NetApp Marketing missed a name they could use, like, dude, where’s my data?

Justin Parisi: I don’t know if you want to ask that question, right? Because like, you want your customers to know where their data is. You know, we’re the intelligent data company.

We’re not, we’re not the dude, where’s my data company.

But yeah you keep workshopping that, Scott. I mean,

Scott Gelb: We had S3 where it’s the valet ticket. You don’t know where the car goes, you just know you have the ticket for it. Right.

Justin Parisi: It’s true. So the shared everything aspect of this is going to entail a pretty major architecture change in general.

I don’t know how much they revealed at Insight, so I’m not going to go too deep into this with what I know. But it is going to be pretty big. I think it’s going to be on the same scale as like an all SAN array where we have this need for a certain use case. And the use case will initially start with AI, right?

Because that’s the hot topic. But I do see this potentially having broader use cases, broader implications for other data sets.

All right well, I think we’ve covered a majority of these announcements here with our NetApp A Team members, we’ve mentioned your contact information at the top here. We’ll also include it in the blog so that people can reach out. We’ll also include information about the A Team if you’re listening and you want to maybe investigate joining up and being a part of that.

So again, thank you everybody for joining us. Scott, Richard, Pascal, Oliver, and Ryan. Thanks everybody for joining us and talking to us all about the new NetApp announcements.

Alright, that music tells me it’s time to go. If you’d like to get in touch with us, send us an email to podcast@netapp.com or send us a tweet at NetApp. As always, if you’d like to subscribe, find us on iTunes, Spotify, Google Play, iHeartRadio, SoundCloud, Stitcher, or via techontappodcast.com. If you liked the show today, leave us a review.

On behalf of the entire Tech ONTAP Podcast team, I’d like to thank the NetApp A Team for joining us today. As always, thanks for listening.

Intro/Outro: [Outro]
