WEBVTT

00:00.000 --> 00:10.000
Hello, good afternoon, everyone. I am Asep.

00:10.000 --> 00:18.000
Today, I will talk about building a wireless MIDI controller with Elixir, ESP32, and AtomVM.

00:19.000 --> 00:26.000
So, Nanas is my own brand for my products.

00:26.000 --> 00:30.000
So, I will introduce myself. Again, I'm Asep.

00:30.000 --> 00:40.000
I'm based in Tallinn, Estonia. My daily job is as a web developer and also I'm a product engineer.

00:41.000 --> 00:48.000
Designing and manufacturing Eurorack modules and custom MIDI controllers.

00:48.000 --> 00:53.000
So, everything is under the brand of Nanas, and it is just solo, only me.

00:53.000 --> 01:05.000
So, I'm like the chief of everything myself, jumping from designing, coding, selling, packing, to shipping.

01:05.000 --> 01:14.000
So, the problem, why I created this: traditional MIDI controllers usually use a wired connection.

01:14.000 --> 01:21.000
There are some wireless MIDI solutions, but they use Bluetooth, which I don't want to use.

01:21.000 --> 01:29.000
Because I want to create a MIDI controller that can integrate with my local Wi-Fi network.

01:29.000 --> 01:37.000
And Wi-Fi is a really popular ecosystem. Also, because I'm a web developer,

01:37.000 --> 01:44.000
if I do embedded Wi-Fi with C/C++, it feels really complex to me.

01:44.000 --> 01:52.000
So, I think my brain cannot catch up with it. And it requires manual memory management.

01:52.000 --> 02:04.000
So, it's really difficult to think about concurrency, like, what happens if one knob fails, or any of the other things you need to handle.

02:04.000 --> 02:08.000
So, I think C++ is really complex for me.

02:08.000 --> 02:27.000
And I usually code web applications with Elixir. So, I was searching for a solution: is there any possibility to run Elixir on a microcontroller like the ESP32? And I found AtomVM.

02:27.000 --> 02:37.000
So, this is the solution that I created; I call it MIDI Mesh. It is an open-source design using the ESP32.

02:37.000 --> 02:45.000
And it has to be compatible with Eurorack hardware, meaning at least the power header is the same.

02:45.000 --> 02:51.000
So, I can put it in the rack of the synth. That's why I built it with the power header.

02:51.000 --> 03:09.000
But it can also be powered using a battery pack like this. And it sends the MIDI data over Wi-Fi via UDP.

03:09.000 --> 03:21.000
So, what is AtomVM? It is actually a subset of the Erlang VM designed for embedded systems. The key feature is that it has lightweight processes.

03:21.000 --> 03:31.000
And it also has message passing between processes. And it can interface with GPIO, I2C, SPI, and UART.

03:31.000 --> 03:41.000
And with this concurrency model, the program is easy to write and understand. And this is the most important part:

03:41.000 --> 03:54.000
it supports Wi-Fi networking on the ESP32, which I need. And it supports these platforms: ESP32, STM32, Raspberry Pi Pico, Unix, even WASM.

03:54.000 --> 04:12.000
But the trade-off of AtomVM is that it has a limited Elixir standard library. So, in the firmware itself, sometimes I had to call the Erlang library directly from Elixir.

04:12.000 --> 04:25.000
And this is the hardware that I use, the ESP32-C3. The CPU is a 32-bit RISC-V with 400 KB of RAM, plus Wi-Fi.

04:25.000 --> 04:39.000
And it has quite decent I/O, like 5 ADC channels and 10 GPIO pins, and the cost is really minimal, like 2 to 5 euros depending on where you buy it.

04:39.000 --> 04:48.000
And for power options, it uses USB-C, the power header for 5 volts, or you can use a portable battery, like this.

04:48.000 --> 05:03.000
And this is the architecture of MIDI Mesh, the device. For this POC, I have four knobs and one fader.

05:03.000 --> 05:16.000
It sends MIDI via UDP to port 4000. And I also have a desktop receiver; it is a small program running in the CLI.

05:16.000 --> 05:29.000
It is also written in Elixir. It connects to the virtual MIDI port of the laptop, and it listens on port 4000.

05:29.000 --> 05:39.000
And it passes the MIDI data to a DAW, to synth software, or to anything that reads MIDI on your laptop.
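A minimal sketch of what such a desktop receiver's UDP loop might look like in Elixir, using Erlang's `:gen_udp`. Port 4000 comes from the talk; the module name, the `handle/1` function, and the exact packet format are assumptions for illustration.

```elixir
defmodule MidiReceiver do
  # Sketch of a desktop receiver: listen on UDP port 4000 (as in the talk)
  # and handle each incoming packet. Names here are hypothetical.
  @port 4000

  def start do
    # :binary delivers packets as binaries; active: false lets us recv in a loop
    {:ok, socket} = :gen_udp.open(@port, [:binary, active: false])
    loop(socket)
  end

  defp loop(socket) do
    {:ok, {_ip, _port, packet}} = :gen_udp.recv(socket, 0)
    handle(packet)
    loop(socket)
  end

  # Assuming a raw 3-byte MIDI CC message: status 0xB0 (CC, channel 1),
  # controller number, value. A real receiver would forward this to the
  # laptop's virtual MIDI port instead of printing it.
  def handle(<<0xB0, cc, value>>), do: IO.puts("CC #{cc} -> #{value}")
  def handle(other), do: IO.inspect(other, label: "unknown packet")
end
```

On the laptop side the binary pattern match does double duty as both parsing and validation, which is why a malformed packet falls through to the catch-all clause instead of crashing the loop.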

05:39.000 --> 05:47.000
And, this is the module architecture. So, basically, there is a main application in the firmware.

05:47.000 --> 05:56.000
And I created modules, like the knob module, the Wi-Fi module, the switch module, and the MIDI generation module, plus some configuration.

05:56.000 --> 06:04.000
And basically, each one is just an Elixir module.

06:04.000 --> 06:24.000
And when I talk about process-based concurrency, it means everything in this hardware is just an Elixir, or Erlang, process.

06:24.000 --> 06:35.000
So, basically, we have five independent processes here. Every knob has its own Erlang process, and the fader is the same.

06:35.000 --> 06:44.000
And they pass messages from each process to the main process.
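The main-process side of that message passing can be sketched like this. The message shapes `{:knob, index, value}` and `{:fader, value}`, and the mapping of the fader to CC 79, are my assumptions; only the CC 75..79 range and the general architecture come from the talk.

```elixir
defmodule Main do
  # Sketch of the main process: block in `receive` until any knob or
  # fader process sends an update, handle it, and recurse for the next one.
  def loop do
    receive do
      msg -> IO.inspect(handle(msg))
    end
    loop()
  end

  # Pattern matching dispatches on the message shape. In the real
  # firmware this would build a MIDI CC packet and send it over UDP;
  # here it just returns the CC pair. Knob 0..3 -> CC 75..78 and the
  # fader -> CC 79 is an assumed assignment within the talk's range.
  def handle({:knob, index, value}), do: {:cc, 75 + index, value}
  def handle({:fader, value}), do: {:cc, 79, value}
end
```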

06:45.000 --> 06:54.000
That is how the VM passes it. And the clock itself is like a 150-millisecond polling cycle, so it is always reading the values.

06:54.000 --> 07:07.000
And this is how the knob process looks. When it loops to read the value of the potentiometer, it uses recursion.

07:07.000 --> 07:15.000
And we have pattern matching. So it has a clean flow; we can follow the code really easily.
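The recursive polling loop described here might look roughly like this. The real firmware reads the ESP32 ADC; `read_adc` below is a stand-in function, and the module and message names are illustrative, not the actual firmware's.

```elixir
defmodule Knob do
  # Sketch of one knob's process: a tail-recursive loop that polls an
  # ADC value roughly every 150 ms (the cycle mentioned in the talk)
  # and messages the main process when the value changes.
  @interval 150

  def start(main, read_adc) do
    spawn(fn -> loop(main, read_adc, nil) end)
  end

  # Tail recursion replaces a while-loop. Because data is immutable,
  # each pass just carries the last value forward as an argument, so
  # there is no shared mutable state between processes.
  defp loop(main, read_adc, last) do
    value = read_adc.()
    if value != last, do: send(main, {:knob, self(), value})
    Process.sleep(@interval)
    loop(main, read_adc, value)
  end
end
```

Spawning five of these (four knobs plus the fader) gives the five independent processes from the architecture slide; if one crashes, only that loop dies.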

07:15.000 --> 07:28.000
And since Elixir is also immutable, there are no race conditions. And the good thing is, because AtomVM is a subset of the Erlang VM,

07:28.000 --> 07:43.000
processes fail independently. So, let's say a knob or the fader here fails; it won't make the whole system go down. Just that one process will fail.

07:43.000 --> 07:51.000
And this is about Wi-Fi and networking. It supports station mode and AP mode.

07:51.000 --> 08:00.000
In station mode, basically, it will be a client connected to an access point; in AP mode, the device itself will be the access point.

08:00.000 --> 08:12.000
For this demo, I am using it as an access point because I just want to connect directly to this device.

08:12.000 --> 08:25.000
And originally, there was a switch to toggle the operation between station mode and AP mode. But I took it off yesterday because it was too wobbly.

08:25.000 --> 08:53.000
So, I just hard-coded it to AP mode. But actually, in the firmware, to handle station and AP mode, I am using pattern matching again, so it is easy to read, and it has a clean state transition between the modes.

08:54.000 --> 09:16.000
And this is the module for MIDI message generation. Basically, it maps the analog data, the raw ADC value from the potentiometer, into MIDI data from 0 to 127.

09:16.000 --> 09:29.000
So, if in the future I want to support MIDI 2.0, I can just map it to the MIDI 2.0 value range as well.

09:29.000 --> 09:37.000
And I am using 10k potentiometers here, and every knob has its own CC number.

09:37.000 --> 09:50.000
I hard-coded them from CC 75 to CC 79, because those are general-purpose CC numbers.
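The mapping itself is a small pure function. This sketch assumes a 12-bit ADC (0..4095), which is what the ESP32-C3's ADC provides; the module and function names are made up for illustration.

```elixir
defmodule MidiMap do
  # Sketch of the MIDI generation mapping from the talk: scale a raw
  # 12-bit ADC reading (0..4095, an assumption about the resolution)
  # down to a 7-bit MIDI CC value (0..127).
  @adc_max 4095

  def to_cc_value(raw) when raw in 0..@adc_max do
    div(raw * 127, @adc_max)
  end

  # Knob index 0..4 -> the hard-coded general-purpose range CC 75..79.
  def cc_number(index) when index in 0..4, do: 75 + index
end
```

Keeping the scaling in one function is what makes the mentioned MIDI 2.0 path cheap: a future `to_cc2_value/1` could map the same raw reading onto the wider value range without touching the rest of the firmware.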

09:50.000 --> 10:06.000
And to build the firmware and deploy it to the hardware, it only uses Mix tasks. So, an AtomVM Mix task installs the latest AtomVM image.

10:06.000 --> 10:25.000
And after that, we can use packbeam to pack the compiled code into the firmware and just flash it with the flash Mix task.
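As a rough sketch, the pack-and-flash flow with the ExAtomVM Mix tasks looks something like the commands below; the exact task names and options can differ between ExAtomVM versions, so treat this as an outline rather than the project's actual invocation.

```shell
# Pack the compiled .beam files into an AtomVM application image
mix atomvm.packbeam

# Flash the packed image to the connected ESP32 over serial
mix atomvm.esp32.flash
```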

10:25.000 --> 10:35.000
I will demo the hardware after the slides finish, but basically, this is the overview of the demo.

10:35.000 --> 10:42.000
MIDI messages connect to the laptop via UDP, in access point mode.

10:42.000 --> 10:53.000
And I will control a Cardinal modular synth, but I don't want to explain what Cardinal is, because there is a talk about Cardinal after this.

10:53.000 --> 11:06.000
It will also demonstrate the low latency and how multiple knobs behave, and I will demo it with the device.

11:06.000 --> 11:17.000
And this is some exploration for the future. Because this can be integrated with the Wi-Fi network,

11:17.000 --> 11:24.000
it can work as a distributed MIDI controller.

11:24.000 --> 11:37.000
Like, imagine the controller not just controlling the synthesizer or the software, but maybe also stage lighting,

11:37.000 --> 11:46.000
or maybe an artist can also collaborate with the audience, making music together,

11:46.000 --> 11:55.000
and doing multi-device synchronization, everything is possible here.

11:55.000 --> 12:16.000
So, basically, from the experience of building this, why am I using Elixir? The pros are the concurrency model, which fits real-time work and the nature of the hardware.

12:16.000 --> 12:26.000
There's a lot of productivity, and the OTP patterns help with reliability. And the challenge here is the smaller ecosystem:

12:26.000 --> 12:41.000
Elixir on embedded and AtomVM have a smaller ecosystem than C++ or even MicroPython, and you may need hardware-specific NIFs.

12:42.000 --> 13:00.000
And the takeaway here is: functional programming actually works on microcontrollers; BEAM processes naturally model embedded systems; and AtomVM brings Elixir to IoT and hardware.

13:00.000 --> 13:12.000
And wireless MIDI enables new music applications, and it is accessible because I'm using open hardware and open software.

13:12.000 --> 13:14.000
So, this is the summary and resources.

13:14.000 --> 13:27.000
The firmware itself I host on GitHub; you can just go there. The desktop receiver is there too, and the license is open source.

13:27.000 --> 13:40.000
So, thank you, and we can do the demo.

13:40.000 --> 13:51.000
So, this one will control the mixer of this patch. Where is the engine... oh yeah.

13:51.000 --> 13:59.000
So, I'm starting, let's make something.

13:59.000 --> 14:05.000
So, mix the channel one.

14:05.000 --> 14:12.000
Now, I'm adding some scene.

14:12.000 --> 14:27.000
And this one, for controlling the power of the oscillator.

14:27.000 --> 14:35.000
And it's low latency.

14:35.000 --> 14:54.000
Let's make it more interesting, adding the snare.

14:54.000 --> 15:03.000
Like I said before, the knobs and the fader can work concurrently.

15:03.000 --> 15:13.000
Like, I'm controlling these together.

15:13.000 --> 15:18.000
Let's add some delay.

15:25.000 --> 15:35.000
So, everything is still working in real time.

15:35.000 --> 15:42.000
Okay, so it's done.

15:42.000 --> 15:57.000
Okay, but I cannot control the volume.

15:57.000 --> 16:05.000
I will try to make it like, okay, I think this is fine.

16:05.000 --> 16:14.000
Basically, it can control any software synth, but for this one, I'm using Cardinal.

16:14.000 --> 16:22.000
And imagine if we can control the scene together with the audience.

16:22.000 --> 16:28.000
And you can also build something modular, like a modular MIDI controller.

16:28.000 --> 16:36.000
Making it, let's say, like a track channel of a mixer, something like that. Everything is possible.

16:36.000 --> 16:50.000
So, yeah, that's the talk from me, and maybe we have any questions?

16:50.000 --> 17:00.000
Thanks, very nice.

17:00.000 --> 17:04.000
Can you stop the music?

17:04.000 --> 17:08.000
I will stop it.

17:08.000 --> 17:13.000
Now you see why I want to do music production, and why we need something like this.

17:13.000 --> 17:19.000
We have a few questions. I'll start from here.

17:19.000 --> 17:20.000
Thanks a lot, it's really nice to see.

17:20.000 --> 17:24.000
I've also played in the past with ESPs, doing MIDI over Wi-Fi.

17:24.000 --> 17:30.000
So, I was wondering, because you're using UDP, why are you not using RTP-MIDI, right?

17:30.000 --> 17:37.000
Yeah, I'm not using RTP-MIDI, because in AtomVM, which I use,

17:37.000 --> 17:43.000
there is no mDNS module yet. That's one.

17:43.000 --> 17:50.000
And the second one is, I think broadcasting is the simplest approach here.

17:50.000 --> 17:52.000
And you can disconnect everything.

17:52.000 --> 17:56.000
Yeah, the nice thing with RTP-MIDI is that there's already a lot of standardized libraries too.

17:56.000 --> 17:57.000
Yes.

17:57.000 --> 18:01.000
Your laptop already has it built in, so you don't need to run an extra service; the discovery is

18:01.000 --> 18:03.000
built in, etc.

18:03.000 --> 18:08.000
And maybe MIDI 2.0; it has its own UDP stack, I think.

18:08.000 --> 18:14.000
I think that's what I'll look at also, maybe MIDI 2.0.

18:14.000 --> 18:18.000
Do we have any other questions?

18:18.000 --> 18:21.000
I'm just wondering about the use of UDP.

18:21.000 --> 18:25.000
What happens if the packet cannot be received by the other side?

18:25.000 --> 18:27.000
Because it's comparing the current value.

18:27.000 --> 18:30.000
That means, you know, one value is not sent.

18:31.000 --> 18:35.000
Yeah, it will just be lost, simply lost.

18:35.000 --> 18:41.000
But since usually in music you're always tweaking,

18:41.000 --> 18:45.000
it's fine.

18:45.000 --> 18:46.000
Okay.

18:46.000 --> 18:50.000
We have one more question here.

18:50.000 --> 18:53.000
Thank you very much for the presentation.

18:53.000 --> 18:58.000
Regarding the future improvements that you mentioned earlier,

18:58.000 --> 19:07.000
I've heard about something that could help with working on multiple devices

19:07.000 --> 19:09.000
simultaneously over the same network.

19:09.000 --> 19:10.000
I don't know.

19:10.000 --> 19:14.000
Have you already heard about it, and maybe thought about using it?

19:14.000 --> 19:20.000
Yeah, I'm thinking about, like, having multiple devices, something like that.

19:20.000 --> 19:22.000
It's possible.

19:22.000 --> 19:26.000
We have one last question, and then we wrap up the presentation.

19:26.000 --> 19:27.000
Okay.

19:27.000 --> 19:28.000
Thank you.

