Integrating a custom device into MFormats or MPlatform

To add support for your device you can use the MFDeviceTest project located in {MFormats SDK directory}\Samples\C++\MFDeviceTest (for instance, C:\Program Files (x86)\Medialooks\MFormats SDK\Samples\C++\MFDeviceTest).

The MFDeviceTest project is a template that walks through all the required steps and contains the essential code for integrating a custom device into MFormats and MPlatform.

You can either modify it directly or create a new project by copying the MFDeviceTest code and properties.


To integrate your device with MFormats, you must write the CLSIDs of your renderer and capturer classes into the registry.

These CLSIDs are defined in the MFDeviceTest.idl file (you can change them if you build your integration by modifying the MFDeviceTest project).

Registry path for renderers: HKEY_CURRENT_USER\Software\Medialooks\MFormats\MFRenderer\extra_devices.clsids

For capturers: HKEY_CURRENT_USER\Software\Medialooks\MFormats\MFLive\extra_devices.clsids

Value format: {<Device 1 CLSID>},{<Device 2 CLSID>},<etc.>

You can add the CLSID of MFDeviceTest to verify that the custom device integration works.
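As an illustration, a .reg fragment for registering a single capture device might look like the following. The GUID below is a placeholder, not the actual MFDeviceTest CLSID; take the real values from MFDeviceTest.idl:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Medialooks\MFormats\MFLive]
"extra_devices.clsids"="{12345678-ABCD-1234-ABCD-1234567890AB}"
```

For renderers, use the MFRenderer key instead of MFLive; to register several devices, list their CLSIDs separated by commas in the same value.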


Important: for debugging, switch off the device_sharing setting in the registry (HKEY_CURRENT_USER\Software\Medialooks\MFormats\MFLive or HKEY_CURRENT_USER\Software\Medialooks\MPlatform\MLive); otherwise breakpoints will not be hit.


Implementing video capture and playback usually includes the following steps:



1. Enumerating devices and collecting their basic properties, such as name, type and channels, which can be obtained via the device SDK.


2. After loading the device list, the application requests device initialization; at this point you should provide code that prepares the device for capture/playback (for instance, selecting a channel, setting up routing, and allocating memory).


3. (Optional) Adding XML properties for your device, which the user can adjust while the program is running.


4. Handling capture/playback in a separate thread that runs a loop, processing one frame per iteration. It is up to you to fill each frame with video and ancillary data from your device during capture, or to extract data from frames to your device during rendering. For instance, a basic way to do this during capture is to obtain pointers to the video and audio data, feed them to the IMFFactory::MFFrameCreateFromMem(...) method, and then pass the output frame to the thread callback. Note that the buffers behind these pointers must match the M_AV_PROPS structure passed to the method. Before feeding a frame to the callback, you can apply various processing methods such as overlaying, scaling or stretching. Playback is roughly the reverse process: it takes frame data from the Medialooks API in a predefined format and feeds it to device memory.


5. Releasing allocated memory and resources, and performing cleanup.
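The steps above can be sketched as a single lifecycle. The sketch below is a self-contained stand-in: none of the names (ChannelInfo, EnumerateChannels, OpenChannel, and so on) come from the Medialooks API or a device SDK; in the real project this logic lives in the MCaptureTest/MRenderTest methods described in the following sections.

```cpp
#include <cassert>
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical stand-in for a vendor SDK channel descriptor (step 1).
struct ChannelInfo {
    std::string name;
    int deviceIndex;
    int channelIndex;
};

// Step 1: enumerate devices/channels via the vendor SDK (stubbed here).
std::vector<ChannelInfo> EnumerateChannels() {
    return { {"MyDevice - In 1", 0, 0}, {"MyDevice - In 2", 0, 1} };
}

// Step 2: per-channel initialization (routing, memory allocation, ...).
bool OpenChannel(const ChannelInfo& ch) {
    std::printf("open %s\n", ch.name.c_str());
    return true;
}

// Step 4: capture loop body - one frame per iteration (stubbed).
bool CaptureOneFrame(const ChannelInfo&) { return true; }

// Step 5: release resources.
void CloseChannel(const ChannelInfo& ch) {
    std::printf("close %s\n", ch.name.c_str());
}

// The overall lifecycle: enumerate, open, loop one frame at a time, close.
int RunCaptureLifecycle(int framesToCapture) {
    int captured = 0;
    for (const ChannelInfo& ch : EnumerateChannels()) {
        if (!OpenChannel(ch))
            continue;
        for (int i = 0; i < framesToCapture; ++i)
            if (CaptureOneFrame(ch))
                ++captured;
        CloseChannel(ch);
    }
    return captured;
}
```

Note that each channel is opened and closed independently, mirroring the fact that MFormats treats every device channel as a separate device.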


Capturing video


1. Open your project. Add the include and library paths of your custom device's SDK in Properties -> Configuration Properties -> VC++ Directories. Don't forget to specify the required .lib and .dll files, either with #pragma comment directives or in the project properties.

2. Open the MCaptureTest.h header file.

3. Find the method MCaptureTest::EnumDevicesXML(IN XMLParse::XMLNodePtr _pXMLProps, OUT CSimpleMap2<CComBSTR, MCaptureBase::TPtr>* _pMapDevices). Inside this method, you must fill the <b>_pMapDevices</b> container with the names and properties of your devices' input channels (usually the device, channel and type indices). Each device channel is thus treated as a separate device. The method must return the total number of input channels.

4. Open the method MCaptureTest::Capture_Start( IN OUT M_AV_PROPS* _pAVProps, IN void* _pvInstance, IN PFNCAPTURECALLBACK _pfOnFrame ). Inside it you can adjust the video and audio properties (M_AV_PROPS) or leave the defaults. Call UpdateMVidProps(M_AV_PROPS* pAVProps) to auto-fill the property fields once the video format and memory type have been defined.

5. Inside the MCaptureTest::Capture_Open(XMLParse::XMLNodePtr _pXMLProps) method you can initialize your device before capturing video. Correspondingly, the MCaptureTest::Capture_Close() method can be used to close your devices and free allocated memory.

6. The actual capturing takes place in the RunThread() method. Each time you get a frame from your device, pass its properties along with the video and audio data to m_cpMFFrames->MFFrameCreateFromMem(...).


Video playback


This process is roughly the reverse of capturing: you get video and audio from the Medialooks API and feed it to your device.


1. Write the same kind of code in the MCaptureTest::EnumDevicesXML(IN XMLParse::XMLNodePtr _pXMLProps, OUT CSimpleMap2<CComBSTR, MCaptureBase::TPtr>* _pMapDevices) method, but in this case the output channels must be obtained.


2. In the method MRenderTest::Render_Start(/*[in]*/ M_AV_PROPS* _pAVProps, /*[in]*/ XMLParse::XMLNodePtr _pProps), adjust the properties of your device according to _pAVProps.

3. You can initialize and deinitialize your device in the MRenderTest::Render_Open(/*[in]*/ XMLParse::XMLNodePtr _pProps) and MRenderTest::Render_Close() methods, respectively.

4. In the RunThread() method, use the following code to obtain a frame from the Medialooks API:

CComPtr<IMFFrame> cpFrameOut;
HRESULT hr = GetNextFrame( &cpFrameOut );
if (FAILED(hr) || !cpFrameOut)
    continue; // no frame available - skip this iteration
MF_FRAME_INFO mFrmInfo = {};
hr = cpFrameOut->MFAllGet( &mFrmInfo );

After this call the mFrmInfo structure contains the frame's audio and video data. Pass that data to your device.