Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - respect

#1
Hello,

Thank you. It helped a lot; the code worked perfectly.

Best Regards
Laszlo
#2
Can you please provide sample code showing how to access the memory buffer containing the sound recording? I would like to store it as a byte array in a database.
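
To show what I am after on the database side, here is a minimal sketch; it assumes the recording bytes have already been obtained from the recorder somehow (the byte[] parameter is my assumption, not the recorder's actual API):

```csharp
// Sketch only: storing an in-memory recording as a byte[] in a database.
// How the bytes are obtained from the recorder is left open on purpose;
// this only shows the VARBINARY(MAX) insert.
using System;
using System.Data;
using System.Data.SqlClient;

class RecordingStore
{
    public static void SaveRecording(byte[] recordingBytes, string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Recordings (CreatedAt, Data) VALUES (@createdAt, @data)", conn))
        {
            cmd.Parameters.Add("@createdAt", SqlDbType.DateTime2).Value = DateTime.UtcNow;
            cmd.Parameters.Add("@data", SqlDbType.VarBinary, -1).Value = recordingBytes; // -1 = VARBINARY(MAX)
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```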
#3
I am looking for a sound recording library able to perform microphone recording directly into a zip file.

Looking at the online documentation, I need something like an audioSoundRecorder1.RecordedSound.SaveToZipFile and also a RecordedSound.RequestExportToZipFile method.
I also need the decompression part, to be able to load the sound into a WaveFormEditor directly from a zip file.

This shouldn't be hard; see the following code: http://stackoverflow.com/questions/19140113/how-can-i-write-blob-datas-to-zip-and-download-it-in-c

Could you please also add a slightly modified version of the demo application "Simple MP3 Recorder with FTP capabilities" to demonstrate zip and unzip capabilities?

Reason: a zip file, e.g. with zero compression, can be used as a virtual file system for storing many sound files, e.g. for a game.
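
To illustrate the zip-as-container idea, a small sketch using the standard System.IO.Compression classes; the sound bytes themselves are assumed to come from the recorder's or editor's own export and load methods, whatever those turn out to be:

```csharp
// Sketch: using a zip file with zero compression as a simple container for
// sound files. Only the zip handling is shown; producing/consuming the
// byte[] data is assumed to happen through the audio library.
using System.IO;
using System.IO.Compression;

static class SoundZipStore
{
    public static void AddSound(string zipPath, string entryName, byte[] soundBytes)
    {
        using (var archive = ZipFile.Open(zipPath, ZipArchiveMode.Update))
        {
            var entry = archive.CreateEntry(entryName, CompressionLevel.NoCompression);
            using (var stream = entry.Open())
                stream.Write(soundBytes, 0, soundBytes.Length);
        }
    }

    public static byte[] LoadSound(string zipPath, string entryName)
    {
        using (var archive = ZipFile.OpenRead(zipPath))
        using (var stream = archive.GetEntry(entryName).Open())
        using (var buffer = new MemoryStream())
        {
            stream.CopyTo(buffer);
            return buffer.ToArray(); // hand this to whatever load-from-memory method is available
        }
    }
}
```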
#4
Hi Severino,

Neither do I. I am planning to build up this competence. As far as I have read, task parallelism views the problem as a stream of instructions that can be broken into sequences, called tasks, that can execute simultaneously. The goal would be to speed up sound analysis by utilizing parallel cores. Maybe this would be something to give to a trainee / student?

In my opinion the design already more or less conforms to the Event-based Asynchronous Pattern (EAP), where async operations like waveform analysis are represented by a method/event pair.

These operations could be wrapped into tasks. The Task Parallel Library provides, for example, dataflow components based on these tasks. Dataflow is useful when the application needs to process the sound waveform images as they become available. I think of these components as hardware circuit components wired together.
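
A minimal sketch of the pipeline I have in mind; AnalyzeWaveform and RenderPreview are placeholders for whatever the real analysis and rendering calls would be:

```csharp
// Sketch of the dataflow idea: an analysis stage feeding a rendering stage,
// each running on its own tasks. The analysis/rendering bodies are placeholders.
using System;
using System.Threading.Tasks.Dataflow;

class WaveformPipeline
{
    public static void Run(string[] soundFiles)
    {
        var analyze = new TransformBlock<string, double[]>(
            path => AnalyzeWaveform(path),   // CPU-bound analysis, parallelized across cores
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = Environment.ProcessorCount });

        var render = new ActionBlock<double[]>(
            peaks => RenderPreview(peaks));  // consume results as they become available

        analyze.LinkTo(render, new DataflowLinkOptions { PropagateCompletion = true });

        foreach (var file in soundFiles)
            analyze.Post(file);

        analyze.Complete();
        render.Completion.Wait();
    }

    static double[] AnalyzeWaveform(string path) { /* placeholder */ return new double[0]; }
    static void RenderPreview(double[] peaks) { /* placeholder */ }
}
```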

Best Regards
Laszlo
#6
Hi Severino,

What is your recommended model for using waveform editors in a parallel processing scenario?
I am thinking about analyzing sounds in a producer-consumer scenario.

I would like to browse sound files in an explorer-like user interface, showing a preview for the selected file.
This would mean either generating, serializing, and deserializing a few hundred waveform editor objects to disk, or creating a waveform cache.
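
For the cache variant, something like the following sketch is what I have in mind; ExtractPeaks is a placeholder, not an actual library call:

```csharp
// Sketch of a waveform cache for the producer-consumer scenario: peak data is
// computed once per file in the background and reused for previews.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class WaveformCache
{
    private readonly ConcurrentDictionary<string, Task<float[]>> _cache =
        new ConcurrentDictionary<string, Task<float[]>>();

    // Producer side: called while browsing; starts the analysis only once per file.
    public Task<float[]> GetPeaksAsync(string filePath)
    {
        return _cache.GetOrAdd(filePath, path => Task.Run(() => ExtractPeaks(path)));
    }

    // Consumer side: the preview control awaits the cached task when a file is selected.
    static float[] ExtractPeaks(string path)
    {
        // Placeholder: in reality this would run the analysis and downsample to preview peaks.
        return new float[0];
    }
}
```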

Are you planning to add some small app samples with best practices for parallel processing?

Best Regards
Laszlo
#7
Hi,

Thank you for this info. I am looking forward to trying it.

Best Regards
Laszlo
#8
I want to create a horizontal audio timeline for a group of vertical sound file rows.
The timeline consists of a table (treeview) with several rows. In the top header row, a waveform analyzer control is scaled down to the size of the ruler, without any waveform area displayed. In the other rows of the table, horizontal bars will represent the sound files. I generate the bars based on the duration of the files. I will take care of scrolling the bars in the treeview myself.

Important: I want to avoid loading or analyzing the sound files. This way I can gain performance and improve the user experience.
I just want to create a visual preview based on the duration of the sounds in the timeline.
So I want to display the long, scrollable top ruler with the length of the longest sound, or even add some offset in these rows.

Currently the only way to create the ruler in the top header is to create a long dummy silence and perform a sound analysis. The other way would be to mix/load with an initial offset position. Both have a performance penalty due to the creation of the long dummy silence and the sound analysis.
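
To make the request concrete, the geometry I need could be derived from the known durations alone, roughly like this (a sketch, not tied to either product's API):

```csharp
// Sketch of the "virtual" timeline idea: bar widths and ruler length are
// derived purely from known durations, offsets, and a pixels-per-second
// scale, so no sound has to be loaded or analyzed up front.
using System;

static class TimelineLayout
{
    // Width of a single sound bar for a given duration.
    public static int ToPixels(TimeSpan time, double pixelsPerSecond)
        => (int)Math.Round(time.TotalSeconds * pixelsPerSecond);

    // Ruler length = end of the sound that reaches furthest (offset + duration).
    public static int RulerWidth(TimeSpan[] durations, TimeSpan[] offsets, double pixelsPerSecond)
    {
        double longestSeconds = 0;
        for (int i = 0; i < durations.Length; i++)
            longestSeconds = Math.Max(longestSeconds, (offsets[i] + durations[i]).TotalSeconds);
        return (int)Math.Round(longestSeconds * pixelsPerSecond);
    }
}
```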

I would like the ruler of the sound editor to be displayable and configurable before any sound is loaded or analyzed.

I would like to request the mentioned "virtual analysis" feature in the following 2 products:
-Audio Sound Editor for .NET
-Audio Waveform Analyzer for .NET

Benefit: performance, especially when you want to browse quickly among a set of files while the timeline for them needs to be updated. Parallel rendering of the waveforms could take place in a background thread. On user request, a more detailed representation can be shown.

A more convenient way would be to show a rough bar representation in the waveform analyzer itself. Once the background task is ready and the user requests it, the view can be switched to the fine representation.
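
A sketch of that rough-then-fine flow; the analysis and display calls are placeholders, not actual product APIs:

```csharp
// Sketch of the rough-then-fine idea: show a duration-based bar immediately,
// start the detailed analysis in the background, and swap in the fine
// waveform once it is ready and the user asks for it.
using System;
using System.Threading.Tasks;

class PreviewController
{
    private Task<float[]> _fineAnalysis;

    public void OnFileSelected(string path, TimeSpan duration)
    {
        ShowRoughBar(duration);                                    // instant, duration-only preview
        _fineAnalysis = Task.Run(() => AnalyzeFineWaveform(path)); // detailed analysis in background
    }

    public async Task OnUserRequestedDetailAsync()
    {
        if (_fineAnalysis != null)
            ShowFineWaveform(await _fineAnalysis);                 // swap in the fine representation
    }

    static float[] AnalyzeFineWaveform(string path) => new float[0]; // placeholder
    static void ShowRoughBar(TimeSpan duration) { }                  // placeholder
    static void ShowFineWaveform(float[] peaks) { }                  // placeholder
}
```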