Hello and Welcome to Data Driven.
In this episode, Frank and Andy speak with researcher Matteo Interlandi about project Hummingbird.
Listen below or at the episode page on the Data Driven website.
Now on with the show.
Hello and welcome to Data Driven.
The podcast where we explore the emerging fields of data science, machine learning and artificial intelligence.
If you like to think of data as the new oil, then you can consider us Car Talk, because we focus on where the rubber meets the virtual road. And with me on this epic road trip on the information superhighway, as always, is Andy Leonard.
How you doing, Andy?
I'm well, Frank. How are you?
I'm doing alright. We're recording this on Wednesday, September 1st, 2021, and the remnants of Hurricane Ida are ripping through the DC area.
So if I suddenly get dropped, that's because we probably lost power. But I do have the backup generator, the one that the professionals installed, and my duct-taped-together solar generator, so I will be offline for a short bit and hopefully come back online.
How are you doing, Andy?
I'm doing alright, Frank. I'm about, gosh, 250 miles south of you, and we didn't get near the effects of Hurricane Ida that you did. We're getting a little bit of rain now, and we've had some wind gusts, but it's been really mild. And if you look on the radar, you've got to watch it and track it, and I do. I'm a weather weenie, an amateur one, but it just kind of went around us to the west, and then it actually started east when it got a little north of us and aimed right for your house.
I was looking at it thinking, that's where Frank lives, right? And look, the eye is coming right for Frank.
Well, fortunately we're safe. There was some flooding in Rockville overnight, and some folks got hit, but nobody died that I'm aware of. You know, we're not accustomed to floods or hurricanes or tornadoes up here in DC; we're more used to the human threats, you know, little things like terrorism and things like that.
Yeah, you guys have got a little bit more of that to worry about than we do here in Farmville. But you know, these days...
Definitely our thoughts and prayers are with the folks in Louisiana and Mississippi. They were hit very hard. I've got friends in western Georgia who were telling me that they took a beating as well, and it just looks horrible.
You know, I've been in a few of those places after hurricanes have hit, as part of church efforts to help clean up and stabilize and stuff like that. People describe it as like a war zone. I've never been in a war, so I don't know; I've seen pictures. It looks like a lot of stuff gets blown over, that sort of thing.
And they're talking weeks and weeks before power comes back on.
That's horrible.
In some places, yeah, that's probably going to do more damage than a lot of other things. It's worrying.
But on a more positive note...
Yes, on a positive note.
I am super excited to have a special guest, and I say super excited because he's from Microsoft. He's a senior scientist in GSL at Microsoft, working on scalable machine learning systems. Before he was at Microsoft, he was a postdoc scholar in the Computer Science Department at UCLA, and he was doing a lot of interesting stuff there. Before that, he was doing research in Qatar, or Qatar, I'm not sure how to say that exactly, and he has a PhD in computer science from the University of Modena. I'm going to botch this. Welcome to the show, Matteo.
Awesome, so we are really excited to have you here.
We actually booked you a whole month in advance.
I’ve been looking forward to this.
Yeah, because you're coming by way of some of the folks at the MLADS conference. For those who don't know, I've mentioned this before: MLADS stands for the Machine Learning and Data Science Summit. It used to be in person; I think now it's entirely virtual for the foreseeable future. I attended MLADS in the summer of 2016, and it was life altering, and I don't say that lightly.
So Microsoft does amazing work in the machine learning and data science space.
Very much cutting-edge stuff. I wouldn't say under the radar, but Microsoft does not do a great job tooting its own horn, so we're very excited for you to come on, Matteo, and talk about this little project that you're working on. Does it have a code name, or what is it called?
Hummingbird. That actually is the code name; I mean, we don't have any specific internal name for this.
OK, what does GSL stand for? That was my first question when I saw your bio.
It's for Gray Systems Lab, and it's named after Jim Gray, who is a Turing Award winner. So it's a research lab named after him, and it sits within the Azure Data organization.
So what cool stuff does Hummingbird do?
So, Hummingbird is a little bit of a weird project, in the sense that when we started it we didn't know if it was going to be a success or not. Because what we try to do, basically, is translate traditional machine learning models, actually not into a neural network format but into tensor programs, such that we can then run them over tensor runtimes such as PyTorch.
When we started this project, the idea was: hey, there is a lot of investment in general pouring into these neural network frameworks. Coming from the Azure Data organization, instead, we are more interested in traditional machine learning methods such as decision trees, linear models, all those boring traditional algorithms. And so we looked at the neural network systems and said, hey, how can we take advantage of all this technology that is built in this domain? You can run neural networks over the CPU, over the GPU, and you can use fancy compilers to generate the tensor programs, all those sorts of techniques. We were kind of struggling to see what we could do with this stack, and what we came up with is this Hummingbird project.
So we basically take a traditional machine learning pipeline, composed of featurizers and machine learning models, after it's trained. First you need to train it, using scikit-learn or ML.NET or one of those traditional machine learning platforms, and then once it is trained we basically convert it into a set of tensor operations. In the current version we use PyTorch for doing this conversion, and then you basically have a PyTorch model, so you can do whatever you can do with PyTorch models: you can deploy it into a PyTorch deployment, you can run it over the CPU or over the GPU, or you can do TorchScript if you want to get rid of all the Python dependencies and just have a C++ program. You can do all those tricks.
Interesting. Does it impact accuracy or precision? Does it improve it, keep it the same?
We tried to keep it the same, and we are able to keep it the same up to floating point rounding.
Since we use PyTorch to run these programs, and not scikit-learn or ML.NET, there are some differences in how they do floating point operations. Accuracy is the same up to rounding in the floating point, which sometimes can actually be quite a bit, but most of the time is really small, almost not noticeable.
Interesting, interesting. Would you know if there was a discrepancy, or do you catch that as part of testing?
It's part of testing.
Right, all software is tested, right, Andy? Andy has a saying about that; I forget exactly how it goes. What is it?
Yeah: all software is tested, some intentionally.
There you go.
So what are the real advantages of converting kind of a traditional model over to a tensor model? Is it portability? Is it speed? You did mention that you can take advantage of GPUs as well as CPUs.
Yes, exactly. Mostly it's related to speed: you can basically run your scikit-learn model on the GPU end to end, and this provides quite a bit of speedup. For some of our examples we even saw two-orders-of-magnitude speedups for some of the models. Usually we try to show that if you use the GPU it can be much faster, but on the CPU we try to be as close as possible to scikit-learn, or whatever the baseline model is. Sometimes we beat it, sometimes we are a little bit slower. But we had some really interesting results. For instance, we did some experiments with some folks at TVM: we took some XGBoost models and compiled the trained XGBoost models, using Hummingbird and TVM, basically doing code generation, and we showed that the model compiled through this path was even faster than the hand-coded C++ implementation that XGBoost uses, on both CPU and GPU. Yeah, it was kind of, OK, what's going on? This was not expected.
Wait, did you say it was faster than a C++ implementation?
Yes. I mean, underneath it's C++; even scikit-learn uses C++ libraries. But using TVM for doing the code generation, they are able to do things like operator fusion, which you don't normally have for these traditional models. So with these tricks, basically, that are coming from the neural network frameworks, we were able to get these surprising numbers.
Interesting, so that's a real performance boost, and if you scale that up into the cloud, that probably means a lot of money saved on cloud computing, too. I imagine a company the size of Microsoft would be very interested in getting better results faster with less cloud compute. You did mention an acronym; I just want to make sure folks know what that is. TVM, what is that?
I don't know exactly what it stands for. Some tensor something, maybe?
Andy looks like he knows, but he's on mute.
I don't, yeah, I don't know.
OK, I'm just curious. I'll go look it up.
There you go.
I think it's for Tensor Virtual Machine, but I'm not sure if that's right.
That sounds about right. Tensor, yeah, tensor.
Ah, I see. So "thanks very much" comes up when I search it, that's interesting. Well, we'll figure out what it is. Put "tensor" in there. TVM, you said?
Yes, it's a GitHub project, but I think it's also an Apache project, and they have a top-level site.
Yeah, there it is: tvm.apache.org. And it doesn't tell me what it stands for, but that's where you can go and learn more about it. According to the website, it's an end-to-end machine learning compiler framework for CPUs, GPUs and accelerators.
Interesting, it does sound interesting, yeah.
That's what's great about this space: there's so much you could geek out on. I'm just looking through, I found an article on machinelearningknowledge.ai about Hummingbird, and it's just like, wow. It basically looks like they copied and pasted it. It's intelligent, but it does look fascinating in terms of what it can do.
So what motivated the creation of Hummingbird?
So the initial motivation was actually different. The initial motivation was not to accelerate the traditional machine learning pipelines, but to use differentiation, basically backpropagation, all these tools that are used for training neural networks, and try to translate them over to traditional machine learning models. So, to basically do backpropagation over scikit-learn pipelines. We started with this tool that was translating these traditional machine learning pipelines, scikit-learn pipelines at the beginning, into PyTorch, such that we could do end-to-end differentiation. And of course, as you can imagine, we were trying to do end-to-end differentiation to increase the accuracy of the pipeline, to see whether, if you use backpropagation, you can increase accuracy. But once we did this translation, we basically realized that, OK, since we are on PyTorch, we can exploit all these other things: you know, the Python frameworks, hardware acceleration, and all those other offerings. And then we basically kind of ditched this idea of doing end-to-end differentiation and running backpropagation over the pipelines, and instead we focused more on building a system for accelerating inference, prediction, over traditional machine learning.
So I'm curious, Matteo. This is not my forte; Frank's the data scientist of our pair. I am a data engineer. I get the speed part of this, I really do; we need that in data engineering too. I think everyone needs that performance part. But can you give me an example of something that you've applied this to? You already gave us an interesting number about how much faster it was, a couple of good references. Was there something in particular that you or your team have worked on, applied this to, and saw some interesting results?
So, first of all, I'm a database person too. I'm not a machine learning person, so I think we are speaking the same language. I'm a database person who is trying to basically understand the machine learning domain and see how much the database domain can take advantage of these techniques, and where my expertise can help. The start of my investigation was traditional methods, because those are the ones that in general you use for tabular data, and that is what we have the most of.
So, related to use cases, let me think. We tried to use it internally for some of our first-party customers, just because they have scikit-learn models and they want to see if they can speed up the inference, the prediction, over these models. When someone reaches out from outside, mostly they want to accelerate tree-based algorithms such as gradient boosting, LightGBM, XGBoost, those kinds of things. In general the use cases are really simple: you have a scikit-learn model and you want to deploy it, and when you deploy it you want to take advantage of the GPU, maybe because you already have some GPU deployments, you already have some neural networks there, and you also want to take advantage of the GPUs in your deployment with these traditional models. Or just because you have a traditional model and you want to improve the inference time. I have to say that most of the performance boost we usually see is related to batch inference: so not when you're doing one single point of inference, but when you have a batch of records that can basically saturate the performance of a GPU, for instance.
So just to follow up on that: it sounds like a lot of what you're doing is focused on the tool that does these translations for you into other platforms, other technologies, and allows you to use, you know, GPU versus CPU. And I think what you're creating, if I understand you, and I didn't do my homework, apologies, is a way to do exactly what we were joking about earlier with testing. You want to see: how can I get the peak performance for this part, maybe this module or this operation of the batch? And maybe the answer here, you mentioned, is CPUs or GPUs; maybe it's C++. And you're able to kind of pick the high spots and the low spots, right, just the stuff that runs fast. Then you can put that together and hand it back to your client or someone who's interested and say: right now, given the volume and the data and the state of hardware, you can get the maximum performance if you do this part here and that part there. Is that fair?
You're actually describing some future work that we are investigating now, which is kind of matching the different parts of the pipeline. What we focus on right now is trying to translate the machine learning models end to end, so taking the featurizations and all the models together, because we basically saw that that is where we can get the maximum performance most of the time: by looking at the model end to end, we can run it completely over the GPU, instead of having to go back and forth from GPU to CPU, for example. But what you point out is something that we are considering. So, looking at the model not as a unique black-box kind of artifact, but as something that we can actually split into different parts and eventually run over different hardware, over different runtimes, such as TVM, as I said before. So some parts run on TVM and some parts run on PyTorch, those sorts of things.
So kind of like a meta-optimizer.
OK, it's a combination. That's exactly where I was going. It's like you're tuning stored procs, Matteo, and you're deciding: I want this one to run on SQL Server, I want that one to go to Postgres. It's just interesting that you can span hardware and software; you can pick platforms in the software to do it.
And I'm with you. I've got my head around it now, and I think that's really cool. This just sounds like something that's going to accelerate the field, really, because the less time you're sitting around twiddling your thumbs waiting for a result, the more you can get done. I mean, that's just common sense, so I love what you guys are doing.
Yeah, exactly.
That's really cool, and I like that. I don't think I've ever heard anybody talk about changing libraries, and changing hardware platforms even. It's hard to even say what you'd classify that as, because running the processes on different chipsets, that's something we used to do back in the seventies, you know. Let's just say it harkens back to the mainframe days. It kind of does, I mean, the 6800s and the eighties and all of that. But this is way, way more advanced than all that. I like the idea, I like being able to do that, and I hear what you're saying: right now, you're just picking a platform, picking an approach and saying, we're going to run this, we're going to generate C++, it's going to run on CPUs, and overall that's going to be your fastest result, it's going to give you your best performance. I get you.
I didn't realize I jumped ahead there. But that happens, sometimes, rare, but it happens. Y'all could totally take that idea, Matteo, and run with it.
Yeah, you can write the paper together if you want to.
There you go.
You know, right away I could. I could do the punctuation. He's really good at reviewing stuff; I will say that from personal experience, from him reviewing my articles in the now-defunct MSDN Magazine.
Here we go.
I remember that, those were fun. I learned a lot reviewing your articles, Frank, 'cause you were always on the cutting edge.
Yeah, neat stuff. But this Hummingbird stuff looks really cool, and it looks like it's as easy to install as pip install Hummingbird.
It might be hummingbird-ml, I think it is.
Yes, yeah, that name was already taken, of course.
Well, yeah. But no, this is really cool. I like where this is going, I like the potential for it, 'cause with the cloud, you think about database as a service: you don't care what the hardware is. Well, you care, but from the end developer's point of view, they won't necessarily care what type of hardware it is. This does open up some very interesting possibilities, just kind of piggybacking on what Andy said.
It's like, wow. One of the things, and I forget who said it, it might have been Kevin Hazzard, who said that now we live in an age where we're not dealing with just spinning platters. I'm butchering what he said, but he says a lot of profound things, and one of the most profound things he said was something like: what would a database of the future look like, given that we're not dealing with spinning platters? Did I get that right, Andy, or something along those lines?
You did. He blogged about it at devjourney.com; we'll have to look that up for the show notes. But Kevin is one of those, he's a pretty profound thinker.
I was going to say, he's a very deep thinker, like he's always ten moves ahead.
Yeah, I could tell reading the article, 'cause, you know, we've known him for a decade or more, and he was struggling to articulate the concept. And if it's tripping someone like Kevin Hazzard up, it's pretty powerful stuff.
But he did a good job on devjourney.com. He's not blogging as much 'cause he's just too stinking busy. But yeah, you're right. And I had a similar conversation.
With my son Stevie Ray, not too long ago. We were talking about flash drives, and how the memory that we have now is so much faster than the platters, and I made this comment to him and then kind of stopped and thought, I don't know if that's accurate or not, and maybe, Matteo, since you're working on the cutting edge, you can help us. We were just poking around thinking about operating systems. We do a lot here at the house in Farmville, VA with IoT; in fact, he's building a new collection of sensors for me right now. So we're going to hook it to a Pi, because Pis can talk to the Internet, they can talk to our router, and that's the next big secret. Don't tell anybody.
One of the neat things about these Pi architectures versus even the really powerful servers that we have right now is that you can compare them: they're both messaging systems, they're just passing around messages physically on a bus when you get to that Pi level. And that's how I learned it, so I'm really excited about him learning that way. But nobody thought about it, because we couldn't conceive of it when hard drives came out. Nobody thought about building the OS, or some second-generation or higher language, without those spinning disks.
And here's the long-winded place I wanted to get to: I don't know if we're there now. I imagine there are probably some OSes out there sitting on GitHub, there are probably a hundred of them by now, where people are doing exactly that; they're taking advantage of the new I/O, if you will. But I don't think the big systems are doing it, I don't think the major popular operating systems are, and for good reason: they're stable, and it's hard to change all of that.
Well, there's a lot of inertia. When you have a widely deployed operating system, you get a lot of inertia, and I'm not talking about just Windows; I mean iOS, I mean Android, I mean Linux. Once you have a wide install base, you lose the ability to be very experimental.
Yeah, I totally concur with that, and I see the cloud, I see Azure, I see this leap that's happened, and it's just crazy. I don't even keep up with it, but just reading tidbits, editing Frank's articles and the like, it's taking these quantum leaps. It's like ten years' worth of stuff happening every six months. And you guys just keep knocking it out, and I imagine at the Gray Systems Lab you're surrounded by people who are just, you know, in Star Trek land or something.
Yeah, yeah.
Yeah, I totally agree on all the things that you said. I was presenting a project related to Hummingbird actually a few days ago, and while preparing my slides I found this slide, I think from a few years back, that basically was showing the number of papers published on machine learning, published on arXiv, and in 2018 there were 100 papers published a day just on machine learning. Just to give an idea of how fast the pace is now, the pace at which innovation is coming, especially in the machine learning and neural network domain.
In the operating system and database domains it's a little bit slower, I would say, because, as Frank said, there is inertia there. These systems are deployed, and if you want to add even new hardware, it takes forever. I saw at Microsoft what happens when you have a new hardware capability and you want to exploit it: it just takes ages. And this is just because, you know, they're used by many people, and even if you want to do a small change, it's hard.
And I'm seeing the articles about Windows 11, where when you try to make a change like that and say, hey, you need this minimum hardware, everybody goes...
Oh yeah, everybody's got the pitchforks out and is freaking out. I remember I was at Microsoft doing evangelism during the shift to Windows 8. You would not believe, well, maybe you would, I don't know, just the horror on people's faces when they got rid of the Start button. It was like the end of the world, like you were killing somebody's grandma. I mean, I disagree with the decision that was made, but let's put it in perspective.
But yeah, I mean, you could still get there, you could still start things, like before. This is funny; this is just a complete sidetrack, Matteo. We do this a lot.
'Cause it never happens, Matteo.
Before keyboards had the Windows key, you could hit Control-Escape and it pulls up the same thing. It's just not the end of the world. Anyway, sorry, I flashed back to 2012. So, Matteo.
We have a bunch of kind of pre-canned questions we're going to ask you; we ask these of all of our guests. About half of them are kind of fill-in-the-blanks, but the first one is: how did you find your way into data? Did you find data, or did data find you?
I would say data found me. I think it was mostly because when I started my PhD I wanted to do distributed systems, and for some reason I ended up doing distributed systems in a database lab. So that is why I think data found me, because I wanted to do something else, but then I ended up doing data. I was really lucky, to be honest.
Cool, very cool.
So our second question is: what's your favorite part of your current job?
Oh, this is a hard question. I will say that I really love my management, in the sense that they allow me, us in general, to be sort of independent. You know, we are researchers, and they find a way to strike a balance between having us be independent and do our own research with crazy ideas, like the one I presented with Hummingbird, and still keep our feet on the ground, kind of helping product teams improve the systems, etc. So I think that is mostly what I love: on one hand I can kind of look at what we can do next, like having the operators run over different targets, and on the other I can see what the real problems are that are coming from product, and how we can solve them. And I love this, to be honest.
Awesome. Our first complete-this-sentence: when I'm not working, I enjoy blank.
I would say work, but that will not do. Yeah, I don't know. Maybe family at this point; maybe spending a lot of time with family. Without the commute time we are often at home, and I have a two-year-old that is driving us nuts.
That's pretty cool. My youngest did kindergarten over Zoom, and it's just as chaotic as it sounds, I'll put it that way.
Yeah, I cannot imagine, to be honest. Now he's in daycare, and we are really happy that he's in daycare, because at that age, I guess, every kid needs to have interaction with other kids, and just staying at home is not healthy. But I can't imagine how hard it is to have a whole year at home while having classes or courses.
Yeah, I agree.
Go ahead, I'm sorry.
I just said, I hope that this whole situation will end soon.
Me too, yeah. It seems it doesn't want to, but yeah, same here.
I think we all do. I think it's going to be one of those things where we look back, for decades probably, and see these little things that we're really not noticing right now; we're just coping and managing and going on. You know, we're going to look back and go, wow, that changed this and that, and there are all these things that came from it, mostly good, I hope. But I think it takes us time to figure out the good. I look forward to that time, when we are reflecting and reminiscing on stuff like this.
I want to too, but we have to be on the other side first, though.
Yes. Our second of three complete-this-sentences is: I think the coolest thing in technology today is blank.
I mean, there are so many. Usually I'm attracted by things that I don't know, so I'll say something like quantum computing, because I don't know anything about quantum computing.
So go to impactquantum.com.
I'm smiling because I was waiting for Frank.
It's funny, because I went to the last MLADS that was held in person, in fall 2019, and the second-day keynote was a hardware keynote. And you know, I go to a data science conference, I want data science, so I was kind of mad that they had a hardware person up there, but then she started talking about quantum and it just blew my mind. Ever since then, I've been so overly excited about quantum computing. But the thing about quantum computing is, that night at the hotel I installed the Q# SDK and stuff like that, and then I was like, OK, now what? Because it made no flipping sense. So I've been on this journey, intermittently, of learning more about quantum computing, and starting the Impact Quantum podcast and then starting the blog have kind of forced me to keep at least a regular cadence of figuring out what's going on there. So it's fascinating.
I will say the one thing I've learned is the importance of linear algebra. Apparently, linear algebra and the way the algorithms work in quantum systems tend to explain each other very well. But yeah, definitely check out Impact Quantum, and the blog I started last week and am regularly updating. But that's, you know, ending the shameless plug. I agree with you; I think quantum computing would be a very cool thing to explore, for a number of reasons.
The next and final complete-the-sentence is: I look forward to the day when I can use technology to blank.
Use technology to not have to drive the car, that is, like, self-driving cars. I live in Los Angeles, so for me self-driving cars would be kind of a complete life change.
I totally agree. I used to enjoy driving. I grew up, well, I didn’t get a license until I was like 21, so for me, I’ve done my time on mass transit, I’ll put it that way. But living in DC, everywhere is just bumper to bumper, probably a lot like LA, and it just really takes the joy out of it.
And you know, at my last job at Microsoft I was at the MTC, and the only reason I didn’t want to take that job was because I had to drive to Virginia, which, despite it being 9 miles as the crow flies, could take 90 minutes to two hours. But, I don’t want to say as luck would have it, ’cause it certainly wasn’t lucky, the pandemic kind of made it so I could work remotely and I never had to do it.
But you know, I share your dream of that day of driverless, you know, self-driving cars, so you can read, you can be on the computer, you can do work while you’re driving, and things like that. Yeah, I’m right there with you.
Yeah, I totally agree with what you said. I mean, I’m from Italy, I’m from Modena, which is where, basically, we say we like fast cars and good food. So we have, like, Ferrari, we have Ducati, they all come from around there. I grew up hearing the Ferraris when they tested on the circuit at Fiorano. I lived, like, I think 3 or 4 miles from Fiorano, so you could hear, when they turned the engine on, how loud that was. So I really like cars, but.
Yeah, I cannot stand, you know, being in a traffic line with other cars just for, like, for instance, going to work or going grocery shopping. It’s just kind of a waste of time.
Especially a Ferrari. A Ferrari is meant to run free.
But that thing in Texas.
That’s right. A couple of my neighbors, well, one of my neighbors has a Ferrari, and you can hear it go by. It sounds beautiful when it goes by, so I totally relate. Somebody down the street owns a Jaguar V12, and when that thing goes by, it’s like angels singing. I know it’s a British car and not an Italian car, and that’s probably heresy, but I will say it sounds impressive.
So it sounds like you might also be a car guy, or at least used to be.
Yeah.
So our next one is: share something different about yourself. But a little caution: it’s a family-friendly podcast. We want to keep that iTunes clean rating here, so don’t make us edit it.
Yeah, I don’t know. I mean, I don’t know what to share, really. I’m kind of spending all my time either at work or with family, so I probably have the most boring life ever.
Do you think that?
I think it is good. I mean, I don’t know if it’s good, but the fact that now we are working from home means I have kind of more time to focus on other different things. Like, for instance, I can watch stocks; before, I couldn’t watch stocks while I was at work. Because now I’m on my laptop, when I have a meeting I can just take a peek, and of course I can track my stocks there while I’m working.
And yeah, I kind of like looking at the stock market, especially because now there’s a little bit of fraud around, all these meme stocks, etc. It makes it exciting, but it’s a little bit dangerous. It’s become like a sport, if you will.
Yeah, I mean, I was trying this one trading app, and when they talk about the gamification of the stock market, I don’t know if you have tried it, but it is crazy. It looks like gambling, it really does.
And the final question, do you listen to audiobooks, and if so, do you have any recommendations?
No, I don’t listen to audiobooks. I think I’m more kind of old style, I would say. I prefer to read rather than listen. You know, I don’t know why.
I think it depends on the person, like it kind of depends on what you’re comfortable with.
I mean, my audiobook listening is nowhere near where it was when I would drive everywhere all the time.
So yeah. The reason we ask is ’cause Audible is a sponsor of the show, and if you go to datadrivenbook.com you can sign up for a free Audible membership. And if you sign up, they give us a little pat on the back and probably enough money to buy a Starbucks, so help support the show. They’ve actually been one of our number one sponsors so far because of this program.
Yeah, so you mentioned you had a website where can folks find out more about you?
What is my website? I think it is... I don’t remember.
Uh oh, into result is a GitHub website into result Dot GitHub dot IO.
All right, we’ll make sure it goes in the show notes so folks can find out more. And definitely go to your favorite command line prompt and type in pip install hummingbird-ml to check out what’s going on. I’m definitely going to experiment with this, ’cause it does look fascinating, and, like Andy said, the potential for this is fascinating, because this could end up in a lot of different places, ’cause it solves a lot of different problems.
So, anything else you’d like to add?
Yeah, if you try it, let us know. We are, you know, looking for contributors and feedback, so if you try it, let us know what you think and how we can improve.
Awesome, thanks. And I’ll let the nice British lady end the show.
Thanks for listening to data driven.
We know you’re busy and we appreciate you.
Listening to our podcast, but we have a favor to ask.
Please rate and review our podcast on iTunes, Amazon Music, Stitcher or wherever you subscribe to us.
You have subscribed to us, haven’t you? Having high ratings and reviews helps us improve the quality of our show and ranks us more favorably with the search algorithms.
That means more people listen to us spreading the joy and can’t the world use a little more joy these days?
Now go do your part to make the world just a little better and be sure to rate and review the show.