How to add code around the component
First of all, if anything in this guide should not be clear, feel free to contact our support team with any doubt or curiosity you may have: we usually answer all of our customers' questions in less than 24 hours. Note that the purpose of this guide is to give a taste of the main features available inside Audio Waveform Analyzer for .NET.
As a complement to this guide, several examples of use of this component can be found inside the "Samples" folder: if during the setup phase you left the installation directory at its default, you will find them inside "\Program Files\MultiMedia Soft\Audio Waveform Analyzer for .NET\Samples".
As seen inside the How to add the component to your projects tutorials, the behaviour and look of the component can be determined at design time, but many settings and operations can also be performed at runtime through your own code.
For those using the product within the C++ environment, the download package includes a module named WavMmsEngDef.h that will help you with the mnemonic constants required by the control. This module can be found inside the Include directory created by the setup package (the default is \Program Files\MultiMedia Soft\Audio Waveform Analyzer for .NET\include).
Before doing anything else, the component needs to be initialized: for this purpose a call to the InitWaveformAnalyzer method is mandatory; the best place to call this initialization method is usually the container form's loading function: for example, when using Visual Basic 6, it will be the Form_Load subroutine while, when using an MFC dialog-based project, it will be the OnInitDialog function which manages the WM_INITDIALOG message. The main purpose of calling the InitWaveformAnalyzer method is to synchronize the component with its container form.
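For a .NET container written in C#, the initialization could be sketched as follows; note that the control instance name waveformAnalyzer and the parameterless call shape of InitWaveformAnalyzer are assumptions made for illustration:

```csharp
// Sketch only: assumes a Windows Forms container with a control
// instance named waveformAnalyzer added at design time.
private void Form1_Load(object sender, EventArgs e)
{
    // Synchronize the component with its container form before
    // performing any other operation (assumed signature).
    waveformAnalyzer.InitWaveformAnalyzer();
}
```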
After having initialized the component, we can perform the sound's analysis in several ways:
• on sound files and audio tracks of video clips through the AnalyzeSoundFromFile method
• on RAW audio files through the AnalyzeSoundFromFileRaw method
• on PCM audio streams provided by an external entity through the sequence of ExternalSoundAnalysisStart, ExternalSoundAnalysisPushData and ExternalSoundAnalysisStop methods: in this case the waveform analysis will occur when the ExternalSoundAnalysisStop method is invoked.
When using the AnalyzeSoundFromFile and AnalyzeSoundFromFileRaw methods, by default the full audio file is analyzed: if you only need to analyze a specific portion of the audio file, you can limit the analyzed range through the AnalyzeSoundFromFileRangeSet method.
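In C#, the analysis calls could be sketched as follows (all parameter lists shown here are simplified assumptions; consult the reference documentation for the exact signatures):

```csharp
// Sketch only: parameter lists are assumptions for illustration.

// Optionally limit the analysis to a range, e.g. from 10 to 30 seconds
// (assumed to be called before starting the analysis).
waveformAnalyzer.AnalyzeSoundFromFileRangeSet(10000, 30000);

// Analyze a regular sound file or the audio track of a video clip.
waveformAnalyzer.AnalyzeSoundFromFile(@"C:\MyFiles\song.mp3");

// Analyze a PCM stream pushed by an external entity: the actual
// waveform analysis occurs when ExternalSoundAnalysisStop is invoked.
waveformAnalyzer.ExternalSoundAnalysisStart();
waveformAnalyzer.ExternalSoundAnalysisPushData(pcmBuffer); // pcmBuffer: a buffer of PCM data
waveformAnalyzer.ExternalSoundAnalysisStop();
```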
In all cases, the analysis can be more or less accurate depending upon the resolution set into the nResolution field of the WANALYZER_GENERAL_SETTINGS data structure: higher resolutions allow a better quality because more peaks are detected; as a side effect, higher resolutions require more memory. When the results of an analysis are no longer needed, the involved memory can be released through a call to the FreeMemory method. The analysis can be aborted at any time through a call to the AnalyzeAbort method.
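The resolution could be raised, for instance, through the general settings data structure; the following C# sketch follows the get/modify/set pattern described later in this guide, while the exact signatures and field access style are assumptions:

```csharp
// Sketch only: signatures and field access are assumptions.
WANALYZER_GENERAL_SETTINGS settings;
waveformAnalyzer.SettingsGeneralGet(out settings);
settings.nResolution = WAVEANALYZER_RES_HIGH;   // trade memory for quality
waveformAnalyzer.SettingsGeneralSet(settings);

// ... perform the analysis ...

// Release the analysis memory once its results are no longer needed.
waveformAnalyzer.FreeMemory();
```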
After starting the sound's analysis, the control allows the container application to stay up to date about the analysis progress through the following events:
• WaveAnalysisStart: fired when the sound's analysis begins.
• WaveAnalysisPerc: fired several times during the analysis in order to inform the container application about the percentage of advancement.
• WaveAnalysisDone: fired when the sound's analysis is completed.
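In C#, handlers for these events could be wired as in the following sketch (the delegate signatures and the event argument name e.Percentage are assumptions for illustration):

```csharp
// Sketch only: delegate signatures are assumptions.
waveformAnalyzer.WaveAnalysisStart += (sender, e) =>
    statusLabel.Text = "Analysis started...";

waveformAnalyzer.WaveAnalysisPerc += (sender, e) =>
    progressBar.Value = e.Percentage;   // advancement percentage (assumed name)

waveformAnalyzer.WaveAnalysisDone += (sender, e) =>
    statusLabel.Text = "Analysis completed";
```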
After completing the waveform analysis, you can request the level, expressed as a percentage, of the highest and lowest peaks available inside a certain range of the waveform through the GetMinMaxPeakLevelsForRange method.
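For example (a C# sketch; the parameter order and the out-parameter names are assumptions):

```csharp
// Sketch only: query the lowest and highest peak levels, expressed
// as a percentage, for the range between 0 and 5000 milliseconds.
float minPeak, maxPeak;
waveformAnalyzer.GetMinMaxPeakLevelsForRange(0, 5000, out minPeak, out maxPeak);
```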
At this point it's possible to create a graphical representation of analyzed sounds in one of the following ways:
Mode 1: Dynamic waveform visualization
Mode 2: Creation of bitmaps in various graphic formats
Mode 3: Rendering on a graphic Device Context (HDC)
Mode 1: Dynamic waveform visualization
By default the Waveform Analyzer user interface will appear as in the picture below, where the top waveform represents the left channel of a stereo sound while the bottom waveform represents the right channel (mono sounds are rendered using one single waveform):
Once the waveform analysis has been performed, you can immediately display the full waveform on the visible area of the analyzer through a call to the SetDisplayRange method by passing 0 and -1 respectively to the nBeginPosInMs and nEndPosInMs parameters.
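For example (a C# sketch; the 0 and -1 values for the nBeginPosInMs and nEndPosInMs parameters are documented, while the rest of the call shape is assumed):

```csharp
// Display the full waveform: 0 for nBeginPosInMs and -1 for nEndPosInMs.
waveformAnalyzer.SetDisplayRange(0, -1);
```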
In case the waveform analyzer should be used in conjunction with a multimedia player implemented through a component like our "Audio DJ Studio for .NET", through third-party libraries like Leadtools Multimedia SDK, BASS or FMOD, or through Microsoft's frameworks like Media Foundation and DirectShow, the component allows displaying the current playback position through a vertical line whose position can be modified in real time through the PlaybackPositionSet method. Usually the best place to insert this call is a timer routine with a very small interval (usually a value between 50 and 100 milliseconds) which obtains the current playback position from the multimedia component and passes it to the waveform analyzer component.
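Such a timer routine could be sketched in C# as follows; here myPlayer stands for any multimedia player component and its GetCurrentPositionInMs call is purely hypothetical:

```csharp
// Sketch only: assumes a timer with an interval between 50 and 100 ms.
private void playbackTimer_Tick(object sender, EventArgs e)
{
    // Obtain the current playback position from the player component
    // (hypothetical call) and forward it to the waveform analyzer.
    int positionInMs = myPlayer.GetCurrentPositionInMs();
    waveformAnalyzer.PlaybackPositionSet(positionInMs);
}
```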
The Waveform Analyzer can be:
• refreshed through the RefreshDisplay method
• scrolled through the ScrollDisplay method: scrolling can also be performed by panning the horizontal scrollbars with the mouse.
• it's also possible to replace the standard cursors, used for scrolling and resizing operations, through the SetTrackerCursors method.
The graphical rendering of this control is fully customizable: you can modify the colors used to render the various elements and hide/show all of the elements surrounding the waveform representation. For this purpose you can gain access to the various settings through the following set of data structures using the provided methods:
• general settings are managed through the WANALYZER_GENERAL_SETTINGS data structure accessible through the combination of the SettingsGeneralGet and SettingsGeneralSet methods.
• rulers rendering settings are managed through the WANALYZER_RULERS_SETTINGS data structure accessible through the combination of the SettingsRulersGet and SettingsRulersSet methods.
• scrollbars rendering settings are managed through the WANALYZER_SCROLLBARS_SETTINGS data structure accessible through the combination of the SettingsScrollbarsGet and SettingsScrollbarsSet methods.
• waveform rendering settings are managed through the WANALYZER_WAVEFORM_SETTINGS data structure accessible through the combination of the SettingsWaveGet and SettingsWaveSet methods.
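All four structures follow the same get/modify/set pattern; for example, the waveform colors could be changed as in this C# sketch (the field names come from this guide, while the signatures are assumptions):

```csharp
// Sketch only: obtain the current waveform settings, modify them
// and apply them back; then refresh the display.
WANALYZER_WAVEFORM_SETTINGS waveSettings;
waveformAnalyzer.SettingsWaveGet(out waveSettings);
waveSettings.colorWaveBackground = Color.Black;
waveSettings.colorWaveLinePeak = Color.Lime;
waveformAnalyzer.SettingsWaveSet(waveSettings);
waveformAnalyzer.RefreshDisplay();
```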
The Waveform Analyzer allows selecting/deselecting portions of the displayed waveform in two different ways:
• through code, using the SetSelection method.
• through the mouse, acting directly on the waveform area: simply press the left button, drag the mouse while keeping the left button pressed and then release the left button. The selected area will appear with its colors inverted and with two tracker handles that allow further manual resizing of the selected range; this selection can also be moved by panning with the mouse. After these manual operations the selected area can be obtained through the GetSelection method.
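For example (a C# sketch; the parameter names and the out-parameter style are assumptions):

```csharp
// Sketch only: select the range between 1000 and 2500 ms through code...
waveformAnalyzer.SetSelection(1000, 2500);

// ...and, after a manual selection performed with the mouse,
// read back the currently selected range.
int selBeginMs, selEndMs;
waveformAnalyzer.GetSelection(out selBeginMs, out selEndMs);
```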
The Waveform Analyzer allows zooming the displayed waveform in two different ways:
• through code, using one of the following methods: SetDisplayRange, ZoomIn, ZoomOut, ZoomToFullSound, ZoomToSelection. After zooming in and out, you can obtain the current sound's range displayed on the Waveform Analyzer by calling the GetDisplayRange method, and the length of this range in milliseconds and pixels using the GetDisplayWidth method.
• through the mouse, acting directly on the tracker handles of one of the available horizontal scrollbars as seen on the picture below:
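A typical zooming sequence could look like this C# sketch (all signatures are assumptions):

```csharp
// Sketch only: zoom to the currently selected range...
waveformAnalyzer.ZoomToSelection();

// ...then query which range is now displayed and how wide it is.
int beginMs, endMs;
waveformAnalyzer.GetDisplayRange(out beginMs, out endMs);
int widthMs, widthPixels;
waveformAnalyzer.GetDisplayWidth(out widthMs, out widthPixels);
```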
Whenever a change occurs on the user interface, the container application is notified through one of the following events:
• WaveAnalyzerSelectionChange: this event occurs when a selection/deselection is performed.
• WaveAnalyzerDisplayRangeChange: this event occurs when a new range within the sound is displayed.
• WaveAnalyzerDisplayWidthChange: this event occurs when the control is resized horizontally.
In order to manage mouse interaction, you can catch the WaveAnalyzerMouseNotification event, which reports the exact position where the mouse was pressed on the waveform: accurate positioning is guaranteed also when the waveform is zoomed. Mouse interaction can be enabled/disabled through the MouseSelectionEnable method. During a playback session mouse interaction is automatically disabled and any call to this method will be ignored.
Each graphical element composing the Waveform Analyzer visualization (scrollbars, time rulers, etc.) is referenced by a specific rectangle on the screen: coordinates and dimensions of this rectangle, expressed in pixels, can be obtained through the GetRectangle method.
Each time the graphical rendering of the Waveform Analyzer is completed, the container application receives a WaveAnalyzerPaintDone event: this event passes as a parameter the HWND of the Waveform Analyzer window, allowing you to perform further custom graphic rendering on the Waveform Analyzer surface; for this purpose you can obtain the handle to the device context (HDC) of the Waveform Analyzer through the GetDC Windows API.
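In C# this could be sketched as follows; the event signature passing the HWND is an assumption, while GetDC and ReleaseDC are the standard Windows APIs imported through P/Invoke:

```csharp
// Sketch only: perform further custom rendering once the control
// has completed its own painting.
[DllImport("user32.dll")] static extern IntPtr GetDC(IntPtr hWnd);
[DllImport("user32.dll")] static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);

private void waveformAnalyzer_WaveAnalyzerPaintDone(object sender, IntPtr hWnd)
{
    IntPtr hdc = GetDC(hWnd);
    using (Graphics g = Graphics.FromHdc(hdc))
    {
        // Any custom drawing on the analyzer surface goes here.
        g.DrawString("Demo", SystemFonts.DefaultFont, Brushes.White, 5, 5);
    }
    ReleaseDC(hWnd, hdc);
}
```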
The Waveform Analyzer automatically manages two vertical lines, one for displaying the current position during a playback session and another one for displaying a position, selected through the mouse, within the loaded sound: the look of these two lines can be modified through the WANALYZER_GENERAL_SETTINGS data structure mentioned before.
The waveform analyzer allows adding further graphical items like vertical lines, horizontal lines and wave ranges: for details about graphic items management you can refer to the How to add graphic items to the Waveform analyzer tutorial.
Mode 2: Creation of bitmaps in various graphic formats
After catching the WaveAnalysisDone event, it will be possible to request the creation of the waveform's bitmap from the control through a call to the BitmapViewSaveToFile or BitmapViewSaveToMemory method. It's important to note that the WaveAnalysisDone event reports the exact number of peaks detected inside the loaded sound and the exact duration in milliseconds of each peak. When generating the bitmap for the full sound's waveform, the reported number of peaks will be exactly equal to the width in pixels of the bitmap. If you need to create a view of a given sound's range, you can obtain the number of pixels needed to display the waveform's view through a call to the BitmapViewGetWidth method.
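For example (a C# sketch; the parameter lists are assumptions for illustration):

```csharp
// Sketch only: save the waveform bitmap of the range between
// 500 and 1500 ms to a file, sized according to the control's advice.
int widthInPixels = waveformAnalyzer.BitmapViewGetWidth(500, 1500);
waveformAnalyzer.BitmapViewSaveToFile(@"C:\MyFiles\waveform.png",
    500, 1500, widthInPixels, 100);
```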
The generated waveform's bitmap will be rendered using colors set into the colorWaveLinePeak, colorWaveLineCenter and colorWaveBackground fields of the WANALYZER_WAVEFORM_SETTINGS data structure.
Below you can see five bitmaps generated using the same sound (whose duration is around 2.4 seconds) but with five different resolutions (just for your information, the song used for generating these bitmaps is available on this link); all these bitmaps have been created using the same height of 100 pixels:
Resolution set into the nResolution field | Generated bitmap
WAVEANALYZER_RES_MAXIMUM | (bitmap)
WAVEANALYZER_RES_VERY_HIGH | (bitmap)
WAVEANALYZER_RES_HIGH | (bitmap)
WAVEANALYZER_RES_MIDDLE | (bitmap)
WAVEANALYZER_RES_LOW | (bitmap)
As mentioned before, you can also generate a "view" of a defined portion of the loaded sound: on the bitmap below you can see the original song in its full length (with WAVEANALYZER_RES_MAXIMUM resolution)
and below you can see the view of the same song for the range between 500 and 1500 milliseconds
In the samples above we have used a very short song limited to 2.4 seconds: if you need to display the full waveform of a longer song on a space much smaller than the total number of required pixels, the BitmapViewSaveToFile and BitmapViewSaveToMemory methods will shrink the waveform in order to fit exactly inside the available space; in the example below you can see a 3-minute song fitting inside a 455-pixel-wide bitmap: the original full-length bitmap, using the WAVEANALYZER_RES_MAXIMUM resolution, would require a width of 243,683 pixels.
VERY IMPORTANT: when using high resolutions and big sound files, keep in mind that the Windows operating system has limits on the size of the bitmaps it can create; this means that it's better to avoid the creation of very large bitmaps: in this case a better approach is splitting the total song's bitmap into several smaller bitmaps, for example creating bitmaps whose width in pixels doesn't exceed the current screen width.
Mode 3: Rendering on a graphic Device Context (HDC)
After catching the WaveAnalysisDone event, it will also be possible to render the waveform directly inside a graphical device context (through its handle or HDC) using the BitmapViewDrawToHdc method; this may be quite useful if you need to perform real-time rendering of the waveform on the screen. Although this could also be used for creating a scrolling waveform during playback, we suggest checking the How to scroll the sound's waveform during playback tutorial for further details about managing a scrolling waveform.
When rendering the waveform on a device context, you can choose whether the rendered waveform must also contain possible custom graphic items (for details about graphic items management you can refer to the How to add graphic items to the Waveform analyzer tutorial) by setting the bShowGraphicItems parameter of the BitmapViewDrawToHdc method to "true" or "false": if this parameter is set to "true", you can choose which combination of items should be rendered by setting the rendering mask through the BitmapViewGraphicItemsMaskSet method.
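A rendering call could be sketched in C# as follows; only the bShowGraphicItems parameter is documented above, while the rest of the signature is assumed:

```csharp
// Sketch only: render the waveform, including custom graphic items,
// on the device context of a PictureBox control.
using (Graphics g = pictureBox1.CreateGraphics())
{
    IntPtr hdc = g.GetHdc();
    waveformAnalyzer.BitmapViewDrawToHdc(hdc, true /* bShowGraphicItems */);
    g.ReleaseHdc(hdc);
}
```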