WEBVTT

00:00.000 --> 00:07.880
Thank you for being here.

00:07.880 --> 00:15.000
We have Marc-André Lemburg, who is giving a talk on DuckDB. Thank you, Marc, for the talk.

00:15.000 --> 00:18.000
Yes, thank you very much.

00:18.000 --> 00:25.000
It's me giving a talk instead of introducing others.

00:25.000 --> 00:29.480
I'm going to give a talk about DuckDB. How many of you know DuckDB or have heard of

00:29.480 --> 00:33.480
DuckDB before? Ah, that's amazing, very good.

00:33.480 --> 00:37.480
By the way, I'm not affiliated with DuckDB, the project or the company.

00:37.480 --> 00:41.480
I just love the project; that's why I'm giving this talk.

00:41.480 --> 00:47.480
A bit about myself. I'm Marc Lemburg. I've been around for ages. I don't even want to say how long.

00:47.480 --> 00:55.480
I am one of the Python core developers. I implemented the Unicode string implementation that you have in

00:55.480 --> 01:05.480
Python. I've been involved in the different Python foundations quite a lot.

01:05.480 --> 01:09.480
What's the motivation for the talk?

01:09.480 --> 01:15.480
You're sitting there. You're getting lots and lots of data from somewhere.

01:15.480 --> 01:21.480
Very different sources. You want to put everything into your database somehow.

01:21.480 --> 01:27.480
You need to somehow organize all your data that you put into your database.

01:27.480 --> 01:33.480
A good way to do this is what's called ETL. How many of you know what ETL is?

01:33.480 --> 01:39.480
Most of you, excellent. Just going to go quickly over this. ETL is basically the standard

01:39.480 --> 01:45.480
business term for taking data and putting it into a database, or converting it from one form

01:45.480 --> 01:51.480
into another format. ETL stands for extract, transform and load.

01:51.480 --> 01:57.480
Where extract basically means you take the data from somewhere, then you transform it,

01:57.480 --> 02:01.480
and then you load it into your system. Nowadays, it's not done like this anymore.

02:01.480 --> 02:05.480
It's actually done in a different order. You first extract the data, then you load it into

02:05.480 --> 02:09.480
some database, and then you do the transformations in that database,

02:09.480 --> 02:13.480
or maybe the final database because that's a lot more efficient.

02:13.480 --> 02:17.480
Databases are simply very, very good at manipulating data.

02:17.480 --> 02:21.480
And then once you've done that, you can start processing.

02:21.480 --> 02:25.480
So what's DuckDB? Many of you already know what DuckDB is; for those who don't:

02:25.480 --> 02:31.480
DuckDB is kind of similar to SQLite, which is for transactional workloads,

02:31.480 --> 02:39.480
so OLTP. But DuckDB uses an approach that is more focused on analytics.

02:39.480 --> 02:45.480
So in general, the picture is that you already have lots and lots of data in your database,

02:45.480 --> 02:49.480
and you want to run fast analytics on them. For example, run reports on them,

02:49.480 --> 02:55.480
get the data out in certain ways, maybe transform the data in certain ways.

02:55.480 --> 02:59.480
So that's what DuckDB is very good at. Also, DuckDB originated from,

02:59.480 --> 03:07.480
or let's say has an academic background to it. So there are lots of things in DuckDB

03:07.480 --> 03:13.480
which are really brand new, which are the latest in what's there in database technology.

03:13.480 --> 03:19.480
So you get really, really good technology and implementations to run your code.

03:19.480 --> 03:27.480
Then, like SQLite, it's a database that actually runs in your process.

03:27.480 --> 03:31.480
So there's nothing extra to install. You just do a pip install or uv add,

03:31.480 --> 03:35.480
and then you have your DuckDB, and you can immediately start using it.

03:35.480 --> 03:41.480
So that's very nice. It's column-based, like most OLAP databases are nowadays.

03:41.480 --> 03:47.480
It uses Apache Arrow for speed. Apache Arrow is very much focused on not copying things,

03:47.480 --> 03:53.480
so that gives you a lot of speed. It has SQL as its standard language, and it uses the Postgres

03:53.480 --> 04:01.480
dialect of SQL. So it's very easy to get started with if you have a bit of knowledge of Postgres.

04:01.480 --> 04:07.480
If you don't, I can recommend just talking to your favorite AI chat tool,

04:07.480 --> 04:13.480
and they are very, very good at writing the SQL for you.

04:13.480 --> 04:19.480
It's single writer, multiple readers, so you cannot write to the database from multiple processes.

04:19.480 --> 04:25.480
This is kind of like the effect that you have because it's like a database that runs in your process.

04:25.480 --> 04:31.480
So it's not a distributed database, and of course it has very, very nice Python support.

04:31.480 --> 04:37.480
So I'm going to go through typical steps that you do in ETL, first the generic ones,

04:37.480 --> 04:41.480
and then I'm going to focus more on the DuckDB ones. So first, of course,

04:41.480 --> 04:45.480
you need the data from somewhere. You read it from all the different sources that you can think of.

04:45.480 --> 04:55.480
I listed some sources here on the slide; typically, you get data as CSV files or Parquet files.

04:55.480 --> 05:01.480
You download them from somewhere, but you can also have data sources that are completely different.

05:01.480 --> 05:07.480
For example, you can read an RSS feed, or you can go to a web page and scrape it and extract data from it.

05:07.480 --> 05:13.480
It doesn't really matter where the data is coming from. You just need to first get it onto your system somehow.

05:13.480 --> 05:19.480
And once you have it, then you can start preparing it for actually loading it into a database.

05:19.480 --> 05:25.480
So this is something that people sometimes forget. It does make a lot of sense to prepare your data load,

05:25.480 --> 05:35.480
because then you can actually use tooling that runs a lot faster than your standard Python for loop where you basically load the data one by one.

05:35.480 --> 05:45.480
So what you need to focus on is getting the data into a format that the database tools you have can easily read.

05:45.480 --> 05:51.480
So for example, for DuckDB, that would be reading CSV files or reading Parquet files.

05:51.480 --> 05:59.480
It also supports other formats, but these are the most common ones that you typically find. So you have to prepare everything, make it nice and clean.

05:59.480 --> 06:03.480
This typically requires going over the complete data set that you have.

06:03.480 --> 06:12.480
And because you don't want to load everything into RAM (that usually doesn't work if you have a lot of data), you need to do that line by line.

06:12.480 --> 06:17.480
So you basically stream your data through your application and then prepare it.

06:17.480 --> 06:22.480
And there's a very nice package that can help you with this. How many of you know Polars?

06:23.480 --> 06:30.480
Few. If you don't, and you need to do this, then do have a look at it. Polars is very, very nice for processing data.

06:30.480 --> 06:35.480
And it can actually do quite a bit.

06:35.480 --> 06:41.480
But the thing with Polars is that it's geared towards working in memory mostly, right?

06:41.480 --> 06:47.480
It's fast when it does in-memory stuff, whereas DuckDB can actually also go to disk

06:47.480 --> 06:57.480
and offload things to disk, making things run even if you have data sets that don't fit in your memory.

06:57.480 --> 07:05.480
Polars is something to definitely have in mind here, but you can also do it just using a standard for loop that you have in Python.

07:05.480 --> 07:11.480
Now once you've prepared everything, you can then run a single command and get everything into your DuckDB.

07:11.480 --> 07:17.480
For CSV files, for example, that would be a single SELECT with the function read_csv.

07:17.480 --> 07:22.480
And that would load the complete data set into your database.

07:22.480 --> 07:28.480
You can basically use wildcards, for example, to read multiple files in one go.

07:28.480 --> 07:32.480
This is extremely fast.

07:32.480 --> 07:39.480
So DuckDB, like many tools nowadays... well, no, wrong.

07:39.480 --> 07:42.480
Most tools nowadays are written in Rust, right?

07:42.480 --> 07:47.480
So DuckDB is actually written in C++, and it does a fairly good job at that.

07:47.480 --> 07:53.480
It's really very fast. It's actually faster than Polars, and Polars is written in Rust.

07:53.480 --> 07:58.480
So what you can do is you can first load the data into staging tables.

07:58.480 --> 08:05.480
That's not always necessary; maybe you want to go directly into your final tables, or maybe you want to upload things into some other database later on.

08:05.480 --> 08:14.480
Or maybe you just want to try things, and then, you know, just throw those tables away again, if you find that your initial load did not work.

08:14.480 --> 08:25.480
So this is usually a good thing: you use staging tables to basically prepare everything, and then once you're done, you can fill the final tables.

08:25.480 --> 08:31.480
Once you have it in the database, then certain operations are very, very easy to do, which would be very cumbersome otherwise.

08:31.480 --> 08:37.480
For example, you can filter out unwanted data, or you can handle missing data.

08:37.480 --> 08:46.480
Even detecting missing data is easier in the database than doing that in Python, reading line by line.

08:46.480 --> 08:48.480
There are always some things that you have to do.

08:48.480 --> 08:53.480
A very common thing, for example, is that you have to inspect the datetime values that you have in there.

08:53.480 --> 09:00.480
Maybe convert them to the proper time zones, or maybe add some missing parts of those datetime values.

09:00.480 --> 09:05.480
Error correction is something that you sometimes have to do.

09:05.480 --> 09:08.480
You can convert complete data types into other data types.

09:08.480 --> 09:11.480
You can add new columns to your data.

09:11.480 --> 09:17.480
You can have aggregate columns, for example, added to your data, so that later on you don't have to compute these aggregates over and over again.

09:17.480 --> 09:20.480
There are lots and lots of things.

09:20.480 --> 09:28.480
I just listed a few of these here on the slide that you can do in the database to transform your data and make it work better with your application.

09:28.480 --> 09:35.480
By doing that, you gain more robustness, and you gain more performance later on.

09:35.480 --> 09:40.480
Like I said, everything can be done using SQL and I already mentioned the LLMs.

09:40.480 --> 09:42.480
Those are very good at these things.

09:42.480 --> 09:46.480
Once you've done that, you have everything in your DuckDB.

09:46.480 --> 09:52.480
Now you can then move the data out of DuckDB into your final database.

09:52.480 --> 09:56.480
If you're lucky, DuckDB is already your final database, so you don't have to do anything.

09:56.480 --> 10:04.480
If not, there are quite a few adapters that can basically write directly from DuckDB into that particular database.

10:04.480 --> 10:11.480
So you don't have to go via, for example, Python again, to read everything and then put the data into some other database.

10:11.480 --> 10:19.480
I listed a couple of databases here, very popular: Postgres of course, MySQL, but they also support Iceberg and data lakes.

10:19.480 --> 10:24.480
So essentially you can get the data out of your DuckDB very easily.

10:24.480 --> 10:28.480
So it's not buried in your DuckDB.
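As one illustration, DuckDB's postgres extension can attach a Postgres database and write tables into it directly. This is a sketch following that extension's ATTACH syntax, with a placeholder connection string and made-up table names; adjust both to your own setup.

```sql
INSTALL postgres;
LOAD postgres;
-- Placeholder connection string: point this at your own server.
ATTACH 'dbname=mydb host=localhost user=me' AS pg (TYPE postgres);
-- Copy a DuckDB table straight into Postgres, no Python round-trip needed.
CREATE TABLE pg.public.events AS SELECT * FROM events;
```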

10:28.480 --> 10:34.480
Then once you're done, you remove those staging tables again and you can go to the next step.

10:34.480 --> 10:38.480
So let me see how much time I have left.

10:38.480 --> 10:42.480
I'm actually speeding through this talk.

10:42.480 --> 10:51.480
So I added a few extra slides here for advanced ETL, but before going into those, do you have any questions on these things?

10:51.480 --> 10:56.480
How many of you have used DuckDB for doing ETL?

10:56.480 --> 10:57.480
A few.

10:58.480 --> 11:00.480
So you should definitely give it a try.

11:00.480 --> 11:06.480
It's very, very easy to basically use and get into your Python process.

11:06.480 --> 11:11.480
Yeah, there's a question there.

11:11.480 --> 11:15.480
Handle what?

11:15.480 --> 11:16.480
Large databases.

11:16.480 --> 11:18.480
So essentially there are two modes in DuckDB.

11:18.480 --> 11:24.480
You can work in memory, or you can basically persist to disk.

11:24.480 --> 11:29.480
And then if you have really large data sets, there's an extension called DuckLake,

11:29.480 --> 11:35.480
which basically turns DuckDB into a data lake implementation.

11:35.480 --> 11:40.480
And you can easily then handle terabytes of data, no problem.

11:40.480 --> 11:45.480
Of course those terabytes of data would then be stored on disk and not in memory anymore.

11:45.480 --> 11:51.480
But it does all the administration that is needed to then handle all these files and then handle all the data.

11:51.480 --> 12:01.480
So the performance of the queries it runs is quite amazing, given that it's such a small implementation.

12:01.480 --> 12:07.480
Right, so some guidelines to basically help you with all this and make it maybe more efficient.

12:07.480 --> 12:16.480
So one thing in database design is that you always have to know how the data is going to be extracted from your database.

12:16.480 --> 12:24.480
And you have to think about how you put data into your database when you know how things are going to be pulled out again.

12:24.480 --> 12:34.480
Because you can very easily optimize this operation of extracting data from the database in the early stages of putting the data in there.

12:34.480 --> 12:39.480
So a typical example would be that you calculate aggregates while loading the data.

12:39.480 --> 12:44.480
So that the database doesn't have to do that anymore later on when you run your reports.

12:45.480 --> 12:50.480
Something else that you need to keep in mind is you need to optimize for things that are done often.

12:50.480 --> 12:55.480
So let's say you very often want to, you know, extract user data.

12:55.480 --> 13:05.480
Then you should focus on these queries and optimize for those, and not, for example, put an index on something that is not used often enough.

13:06.480 --> 13:17.480
And for both of these: if you add extra data to make your queries run fast, that of course takes more space, and you have to be aware of that, so it's a compromise that you have to find there.

13:17.480 --> 13:30.480
You have to strike a good balance between these things, and there's not much that I can really recommend here; you just have to try out certain approaches and see what works best for you.

13:30.480 --> 13:36.480
So one thing that takes very, very long in databases is joins.

13:36.480 --> 13:46.480
You know what joins are? Yes, most of you. So you take two tables and then you essentially create a third table out of those two tables.

13:47.480 --> 14:01.480
Joins can take very long because, typically, if you're lucky an index can be used so it runs a bit faster, but if you're unlucky it actually has to scan the complete table and then create something new that you then use as the basis for your query.

14:01.480 --> 14:13.480
And the typical thing to do there is to denormalize your data. Denormalizing means that you don't have too many references inside your data from one table to another.

14:13.480 --> 14:20.480
Essentially, you don't need the joins when you run the queries; instead, what you do is copy the data to multiple tables, of course.

14:20.480 --> 14:32.480
That introduces dependencies between your tables, and you have to be aware of that; it has to be done with this in mind, and you have to really be sure that you know what you're doing.

14:32.480 --> 14:40.480
But if you do, and you know that these queries are going to happen and be used a lot, this can speed up your queries a whole lot.

14:41.480 --> 14:46.480
Something else that you can do in traditional databases is materialized views.

14:46.480 --> 14:49.480
Unfortunately, DuckDB isn't quite there yet.

14:49.480 --> 14:52.480
I hope they will add this as well.

14:52.480 --> 15:09.480
Materialized views: a view is essentially something like a SELECT statement that is defined in sort of the same way as you would define a table.

15:09.480 --> 15:19.480
And then, behind the scenes, when you use that view, the database will run these SELECT statements and essentially get all the data from the different tables that you're referencing,

15:19.480 --> 15:22.480
and make it look like you're actually working with a table.

15:22.480 --> 15:31.480
And when you materialize these views the database then manages the actual tables for you instead of always running these queries.

15:31.480 --> 15:34.480
So that makes things run a lot faster.

15:34.480 --> 15:41.480
But DuckDB isn't there yet. So I'm pretty sure they're going to add this.

15:41.480 --> 15:45.480
Something that people often do wrong is working with indexes.

15:45.480 --> 15:50.480
They basically try to put indexes on all the different columns they have in the database.

15:50.480 --> 15:59.480
That's not a good thing to do. Typically you should just start with a single index, of course on the primary key that you have in there,

15:59.480 --> 16:04.480
And then add indexes very very carefully one by one.

16:04.480 --> 16:11.480
And the best way to do that is to run your query analyzer and check which kinds of queries actually run.

16:11.480 --> 16:20.480
And then you put indexes on exactly those fields instead of just randomly defining something as you might see fit.

16:20.480 --> 16:26.480
The problem there is when you get new data into your database then these indexes always have to be updated.

16:26.480 --> 16:30.480
And especially if you're loading lots of data into your database, this becomes an issue.

16:30.480 --> 16:40.480
So what you typically do is you either switch off the indexes, if that's possible with your database, or you remove the indexes and have them re-added later on.

16:40.480 --> 16:46.480
So that's very important to keep in mind when you have lots of data to insert.

16:46.480 --> 16:50.480
If you have lots and lots of data, then you can partition it.

16:50.480 --> 16:56.480
It means that you basically spread it out

16:56.480 --> 17:08.480
to different locations, essentially. In DuckDB you would not necessarily do this directly; you would use the DuckLake extension, and the DuckLake extension would do this partitioning for you.

17:08.480 --> 17:17.480
So that's also something that you can try if you have too much data to handle and you want to make things still run fast.

17:17.480 --> 17:24.480
Right, deduplicating data is pretty obvious: you remove duplicates that you have in your data.

17:24.480 --> 17:28.480
If you're dealing with IoT data, for example, you very often get duplicate data.

17:28.480 --> 17:34.480
For example from temperature sensors sending things more than just once.

17:34.480 --> 17:39.480
So this happens depending on what kind of data source you have you might want to look into that as well.

17:39.480 --> 17:48.480
Some additional things you can do: you can pre-filter your data, so you actually don't put all the data inside the final database.

17:48.480 --> 17:56.480
But instead you keep it in DuckDB and then just load it from DuckDB into your target database when you actually have a use for it.

17:56.480 --> 18:02.480
So you don't throw it away, but it just doesn't create load in your target database.

18:02.480 --> 18:08.480
You can pre-aggregate data, I already went into that, and you can create something that's called data sketches.

18:08.480 --> 18:10.480
Do you know what data sketches are?

18:10.480 --> 18:12.480
No, no one.

18:12.480 --> 18:16.480
So data sketches is a very nice technology.

18:16.480 --> 18:21.480
It's also not so well known as you can see here.

18:21.480 --> 18:31.480
It's a strategy to make queries that normally run very long run a lot faster, by giving up a bit of accuracy.

18:31.480 --> 18:39.480
So if you say that okay, I want the total number of let's say I don't know students in a certain course.

18:39.480 --> 18:43.480
And that across let's say your whole country.

18:43.480 --> 18:48.480
Then normally you would have to go and count all the students in that course across the whole country, right?

18:48.480 --> 18:54.480
So that would be like a huge and very data-intensive query that you have to run.

18:54.480 --> 18:59.480
If you say, I just want this number to be accurate to, you know, the order of 10%,

18:59.480 --> 19:03.480
then you can have a data sketch calculated.

19:03.480 --> 19:08.480
And the data sketch would then not go into all the different tables with all the different data.

19:08.480 --> 19:15.480
But instead it would do a sample of this data across all these different tables.

19:15.480 --> 19:20.480
And then collect the samples and out of the sample it then calculates a final result.

19:20.480 --> 19:24.480
That is not exact but it's exact to a certain percentage.

19:24.480 --> 19:28.480
And then you can make things run a lot faster,

19:28.480 --> 19:35.480
if you don't need to count exactly, in that particular case, all the students.

19:35.480 --> 19:41.480
If it's good enough to have a count that's okay to within 10%, right?

19:41.480 --> 19:47.480
So I am a bit quick with the talk, so basically the conclusion is that

19:47.480 --> 19:50.480
DuckDB makes things a whole lot easier.

19:50.480 --> 19:56.480
You can actually do things on your notebook which you'd normally use a cloud server for.

19:56.480 --> 20:04.480
And I would really recommend that you, you know, make some experiments, try it out and see how it works for you.

20:04.480 --> 20:05.480
Right.

20:05.480 --> 20:07.480
So that's it.

20:08.480 --> 20:17.480
Thank you.

20:17.480 --> 20:22.480
Any questions?

20:22.480 --> 20:27.480
Yes.

20:27.480 --> 20:33.480
One from the chat: can DuckDB move the transformation before the extraction, by

20:33.480 --> 20:37.480
the query optimizer knowing the schema and the required transformation?

20:37.480 --> 20:43.480
So, like lazy reading from a Parquet file.

20:43.480 --> 20:51.480
The transformation before the extraction...

20:51.480 --> 20:53.480
Yes it actually can do that.

20:53.480 --> 20:59.480
It is very smart about these things, very much like Polars, for example.

20:59.480 --> 21:02.480
Right.

21:02.480 --> 21:05.480
So DuckDB can optimize these things.

21:05.480 --> 21:10.480
It's smart in reading just the data that it actually needs

21:10.480 --> 21:11.480
for a certain query, right?

21:11.480 --> 21:13.480
So Polars does the same thing.

21:13.480 --> 21:17.480
It basically first goes into the data checks what data is there.

21:17.480 --> 21:22.480
And then it tries to extract only the data it needs for that particular query instead of reading everything.

21:22.480 --> 21:25.480
So that's a great optimization that you can do as well.

21:25.480 --> 21:29.480
If you don't need the whole data from your sources.

21:29.480 --> 21:31.480
Another way?

21:31.480 --> 21:36.480
Other questions from the room.

21:36.480 --> 21:37.480
Hey.

21:37.480 --> 21:39.480
Thank you for the talk.

21:39.480 --> 21:45.480
So you said that denormalizing data would help a lot.

21:45.480 --> 21:52.480
And most of the time, when you want to denormalize data in a database, you would do that with a trigger to keep the data in sync.

21:52.480 --> 21:55.480
Is that how you would do it also in DuckDB?

21:55.480 --> 22:01.480
Or would you denormalize it?

22:01.480 --> 22:04.480
So when you talk about denormalizing data.

22:04.480 --> 22:05.480
Yes.

22:05.480 --> 22:10.480
Usually when you want to do that in a database you have a trigger that will keep the data in sync.

22:10.480 --> 22:13.480
Is that how you would do it also in DuckDB?

22:13.480 --> 22:15.480
Or would you denormalize it?

22:15.480 --> 22:18.480
I would probably do it at load time.

22:18.480 --> 22:22.480
I'm not sure how well DuckDB actually implements triggers.

22:22.480 --> 22:25.480
I've never done that with DuckDB.

22:25.480 --> 22:27.480
So it definitely works with Postgres, for example.

22:27.480 --> 22:29.480
That's how you do it in Postgres.

22:29.480 --> 22:32.480
But with DuckDB, I don't know.

22:32.480 --> 22:33.480
Thank you.

22:33.480 --> 22:37.480
Are there more questions from the room?

22:37.480 --> 22:39.480
Hi.

22:39.480 --> 22:41.480
I was just wondering.

22:41.480 --> 22:45.480
Does DuckDB have any first-party support for these data sketches?

22:45.480 --> 22:46.480
Like does it?

22:46.480 --> 22:49.480
Like built into the SQL syntax?

22:50.480 --> 22:51.480
That's a very good question.

22:51.480 --> 22:56.480
I don't know of a specific query for it to do something.

22:56.480 --> 23:00.480
There may be extensions for DuckDB doing this.

23:00.480 --> 23:04.480
So I'm not 100% sure whether there are.

23:04.480 --> 23:08.480
But new extensions are being built basically daily.

23:08.480 --> 23:13.480
There's a whole universe of extensions being built for DuckDB.

23:13.480 --> 23:17.480
And it's really amazing how the community picks up all these things.

23:17.480 --> 23:19.480
It tries to make everything work with DuckDB,

23:19.480 --> 23:24.480
Because it's just an amazing project to solve things, right?

23:24.480 --> 23:26.480
And so I'm pretty sure there is something.

23:26.480 --> 23:27.480
Awesome.

23:27.480 --> 23:28.480
That's really good.

23:28.480 --> 23:29.480
Thank you.

23:29.480 --> 23:30.480
Thank you.

23:34.480 --> 23:37.480
Yeah, I just have a question about the use case.

23:37.480 --> 23:42.480
Just to know if it's a good idea to use DuckDB in that case.

23:43.480 --> 23:48.480
So for example, for a dump and restore of a database that is not DuckDB,

23:48.480 --> 23:50.480
would it be a good or bad idea?

23:50.480 --> 23:54.480
Like MySQL: DuckDB reads it in, and then stores it in Parquet

23:54.480 --> 23:57.480
files or something, and then the same for restore.

23:57.480 --> 24:01.480
And then the second use case: transforming, for example, MySQL.

24:01.480 --> 24:07.480
So for a migration from MySQL to PG, could DuckDB be the ETL in between these two?

24:07.480 --> 24:09.480
Yes, definitely.

24:09.480 --> 24:10.480
Because it has...

24:11.480 --> 24:16.480
It has direct interfaces to, for example, MySQL or Postgres, right?

24:16.480 --> 24:18.480
And it can read Parquet files directly.

24:18.480 --> 24:21.480
So it's like the ideal thing to put in between.

24:21.480 --> 24:23.480
So basically to transform from one to another.

24:23.480 --> 24:24.480
Thank you.

24:28.480 --> 24:31.480
That was the talk by Marc-André.

24:31.480 --> 24:33.480
Give him some applause, please.

24:33.480 --> 24:34.480
Thank you.

24:34.480 --> 24:37.480
Thank you.

24:37.480 --> 24:43.480
And then we have five minutes for switching rooms, and off we go.

