Embracing the open, despite its flaws... why PEG should learn to love FFmpeg

kreynen

I feel like I'm posting too often to the group, but there is so much information we are trying to share. Since there is a conversation about FFmpeg on the ACM Announce list and it came up in a call with Austin, I thought I should try to get some of what we've been doing with FFmpeg posted. Most of the discussion on the ACM list has been about applications that wrap FFmpeg up with a graphical interface. That's a great introduction, but to really leverage FFmpeg the PEG community is going to have to start developing some collective knowledge about FFmpeg's command line configuration options.

The Open Media System leverages FFmpeg, but Brian Hiatt has had to put a lot of time and effort into modifying Drupal to be able to deal with the large, broadcast quality files the PEG community works with. In addition to handling large files, Brian invested a lot of time reverse engineering and testing FFmpeg settings to generate files that comply with the ACM's (secret) standard for MPEG2. The command line configuration we're using with Media Mover is...

-i %in_file -acodec ac3 -ar 48000 -ab 448k -vcodec mpeg2video -f dvd -copyts -s 720x480 -g 18 -b 8000000 -maxrate 9000000 -minrate 0 -bufsize 835008 -packetsize 2048 -muxrate 10080000 %out_file

If you aren't using Media Mover, you'll need to replace %in_file and %out_file. This configuration produces an MPEG2 that has worked on both Princeton and Leightronix.
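
For anyone running this outside Media Mover, here is the same configuration as a stand-alone command with example filenames substituted for %in_file and %out_file (the filenames are just placeholders):

# Same settings as above, run by hand: AC3 audio at 48 kHz / 448 kbps, mpeg2video
# in the DVD muxer, 720x480, GOP of 18, 8 Mbps average / 9 Mbps max video bitrate.
ffmpeg -i source.avi -acodec ac3 -ar 48000 -ab 448k -vcodec mpeg2video -f dvd \
  -copyts -s 720x480 -g 18 -b 8000000 -maxrate 9000000 -minrate 0 \
  -bufsize 835008 -packetsize 2048 -muxrate 10080000 output.mpg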

FFmpeg came up last week on our call with channelAustin. Kevin King from Root6 (developers behind the Content Agent encoding wonder box) joined us on the call. The big question we were trying to answer was whether Content Agent had an API/web service other systems could communicate with, or whether this product was going to be a step back to the drop box/hot folder workflow that DOM had been using with the Canopus/Grass Valley Pro Coder. Content Agent offers a number of really nice graphical user interfaces for setting up different workflows, but as far as interfacing with other systems goes, Content Agent is a 'dumb' encoder. By 'dumb' I mean any other system can give it a file, but while that file is encoding the other system is just left waiting. This is 'dumb' like HTTP requests were dumb before AJAX... way back in Web 1.0. Now, asynchronous calls from JavaScript allow much more functionality than the basic GET and POST requests the web was originally limited to. The same types of improvements in the standards that allowed the development of browser-based applications like Gmail are also available between servers. FFmpeg already supports this type of asynchronous communication with other systems.
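
To make that concrete, here is a rough sketch (not part of any of our modules) of what that looks like from the command line. FFmpeg reports progress on stderr while it encodes, so a calling script can watch that output and report status instead of just waiting for the finished file:

# Rough sketch: start an encode in the background (swap in the MPEG2 settings
# from above for real use) and follow its progress from another process.
# FFmpeg separates its progress updates with carriage returns, hence the tr.
ffmpeg -i source.avi output.mpg 2> encode.log &
tail -f encode.log | tr '\r' '\n' | grep --line-buffered -o 'time=[^ ]*'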

The good news for Austin is Root6 is in the process of adding a web service layer to Content Agent that should give us something to work with in the future, but it doesn't look like that interface will be available in March. That means channelAustin will need to run FFmpeg on video being processed through the Open Media System, but will also be using Content Agent to process video entering their workflow through other channels.

It says a lot about me that I couldn't care less who won the Super Bowl, but I'm actually excited about the chance to run these encoding systems side by side.

When comparing open source solutions like FFmpeg to commercial options like Content Agent, there are a number of things to consider to determine which solution is right for you. Legal issues, the pace of change, quality of the user interface, and community vs. 1-800 support are big considerations.

  • Pace of Change

    The development and adoption of new codecs like Ogg happen much faster in open source solutions. IMHO, the basic life cycle of developing software products for profit slows the pace at which improvements are made. The process involves investing in the development and then profiting from that investment as long as possible before investing in the development of a new release and selling customers on the benefits of paying for an upgrade. Consider how much individuals and organizations have invested in upgrades to Microsoft Word over the last 20 years. Have we seen substantial improvement in functionality for that investment?

    Open source solutions are more about solving shared problems, since there is no profit to be made selling the updates. First Google and now the Mozilla Foundation have contributed $100,000 toward solving the problem of patent claims on the math in codecs.

  • User Interface

    This is an area where the commercial option almost always wins and Content Agent is no exception. Media Mover's interface to FFmpeg is functional, but could hardly be considered end user friendly.


    FFmpeg is not nearly as nice graphically (other than front ends like ffmpegX for OS X, FFmpeg has no GUI of its own), nor is there anyone to call about problems you run into.

  • Support

    A big advantage of a system like Content Agent is that it comes with a 1-800 number for support. The biggest advantage of FFmpeg is that it is free... and open. FFmpeg is a lot like Drupal that way. Both open source solutions have a community of users contributing documentation and volunteering help to new users, but until you actively engage it is difficult to find the information you are looking for or know who to contact about the problem.

    The knowledge base around Drupal has grown substantially within the PEG community since the first stations embraced it. We need to start developing that same type of community around FFmpeg. On the ACM Announce list, Jamie Capach (Executive Director, Pemi-Baker Community Access Media) wrote...

    If you also happen to be a Leightronix user you can find a tutorial and ffmpegX preset for creating files for your NEXUS server in their support site under tutorials in the NEXUS section. It has worked very well for me here.

    I'm really glad Leightronix supports using open source front ends to their playback servers, but that's the type of information PEG stations need to be openly sharing with each other. Not only is the ffmpegX tutorial closed, the discussion about the fact that it even exists is closed. If you Google FFmpeg and Leightronix, my post about UPTV is the first thing you'll find. That said, I'm not suggesting David Leighton or Aaron Todd stop working on adding an RSS feed of upcoming shows to change the permissions on their tutorials. Leightronix has become very supportive of the Open Media Project, and all of their customers should thank them for the work they are doing to open their playback server up. We're getting very close to having the Broadcast Synchronization module working with a playback server other than Princeton.

    Developing the open, transparent places to share this type of information and developing a community that is capable of supporting itself is a large part of the Open Media Project.

Making a decision to use open solutions like FFmpeg to save your organization money is a good first step, but if that is the only step you take you are just an open source "user" in a negative sense. We need to be openly contributing and sharing information to offset the disadvantages of open source.

Comments

Awesome, detailed post

videohead

Thanks for keeping the issues up front and the discussion open!
I'm not at all concerned about the IP or licensing issues of using MPEG2. I think that's a red herring in PEG - our uses are clearly fair use, IMLO.

Personally, I'm eventually looking to end-run around MPEG2 anyway, and eventually go straight to a scheduler app which will play Flash media or Silverlight/WMV in an AJAX, Silverlight, or Flex interface at 640X480 or 800X600. This would allow for multiple, non-geographically isolated delivery of a specific content base, and would allow people to tune in via the web as well as via traditional broadcast or cablecast. It would allow text and image based messages to be seamlessly integrated with full screen video. It just doesn't make sense to me to archive and store a format which is not able to be capably delivered via broadband, and can't really be played in an HTTP session.

I think there are two primary issues limiting FFMPEG implementation at our station.
1. Implementation of encoding automation. I still haven't been able to automate FFMPEG to the degree that I would like to - I'd love to have something similar to what Pinnacle Systems and Cleaner have . . . a drop folder for AVI or QT clips with automated encoding (currently just to FLV) applied to anything copied to it. This is clearly due to my own technical limitations - any help, example scripts, or guidelines appreciated (I've sketched roughly what I'm after below this list). I've had slightly better luck with VLC - I know the interface and CLI a little better, I guess. Still not reliable, though.
2. Complexity of the GUI, complexity of no GUI. Simply put, my users can't handle the application (even with detailed documentation). A select few of them know Flash media encoder (Windows GUI), and an even smaller group know Compressor. They like YouTube - simple, simple, simple. Self-service.
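
Here's the kind of thing I'm imagining for #1, as an untested sketch (all the paths, the polling interval, and the FLV settings are guesses on my part):

#!/bin/sh
# Untested sketch of a drop-folder encoder: poll a folder and run FFmpeg on any
# AVI/QuickTime clip that shows up, writing an FLV alongside it. A real version
# would also need to wait for half-copied files to finish before encoding them.
WATCH=/srv/dropbox
DONE=/srv/dropbox/done
OUT=/srv/encoded
mkdir -p "$DONE" "$OUT"
while true; do
  for f in "$WATCH"/*.avi "$WATCH"/*.mov; do
    [ -e "$f" ] || continue                # skip the literal pattern when nothing matches
    name=$(basename "$f")
    ffmpeg -y -i "$f" -ar 44100 -ab 96k -b 500k -s 640x480 "$OUT/${name%.*}.flv" \
      && mv "$f" "$DONE/$name"             # set the source aside once it encodes cleanly
  done
  sleep 60
done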

Can you point me in the direction of some PEG Vorbis implementations? Are there any?

Compiled FFMpeg for RedHat 5

coderdan

Portland needs to get FFMpeg installed on our development server, which is running RedHat 5.2. Does anyone know of a pre-compiled package, or could anyone help with getting one made? Thanks.

rpm.pbone.net doesn't know

ifitos

rpm.pbone.net doesn't know of an ffmpeg rpm package for RH 5.x, but compiling it from source is straightforward and simple. Simpler than creating an rpm for sure.
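
For what it's worth, the build itself is just the usual dance (a rough outline; you'll need gcc, make, and the -devel packages for whatever codecs you want enabled first):

# Rough outline of a source build; --enable-gpl turns on the GPL-licensed parts,
# and the prefix is only an example.
./configure --prefix=/usr/local --enable-gpl
make
make install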

FFmpeg on Redhat/CentOS/Fedora

civicpixel

Typically installing FFmpeg on RedHat/CentOS/Fedora is as easy as "yum install ffmpeg ffmpeg-devel", but you might need to add a third-party repository first. I'm not certain about the most recent repository info, but it's probably still the same as the info provided in both of these links:
http://corpocrat.com/2008/05/04/easy-install-ffmpeg-in-linux-servers/
http://www.mysql-apache-php.com/ffmpeg-install.htm

Got FFMpeg Installed

coderdan

Thanks Brian for your repository link. With some minor changes I was able to yum install ffmpeg ffmpeg-devel

correction for the record

akira_kev

Hi, Kevin King here, the product manager mentioned above. I'd just like to clear up a few misconceptions.

ContentAgent is far from a 'dumb' encoder: not only is there an XML API, there is also a SOAP interface, which we have used to create our Silverlight web interface with realtime reporting on all encodes. http://www.root6technology.com/News/PressReleases/NAB2008_silverlight.html

You posted, "To my knowledge there is no legal precedent to support either my opinion that PEG stations don't need to pay licensing to use FFmpeg or Root6's opinion that all FFmpeg users have to pay licensing fees." Sorry, this wasn't Root6's opinion. What I said during the conversation was that I questioned whether anyone using FFmpeg for commercial gain needs to register with the relevant authority, be that MPEG LA, Dolby, QuickTime AAC, etc. Commercial gain could be defined as making money from advertising. I then checked after our meeting, and even free TV has to cough up.

The fees laid out in this PowerPoint (http://www.mpegla.com/avc/avcweb.ppt) are just for MPEG4/AVC. As you can see from slide 9, for free television you should pay $2500 for each encoder you build, just for use of the AVC codec in transmission, rising to $10,000 if you reach more than 1,000,000 households. I question whether, if FFmpeg were to be used, you might be liable for these fees, due to the implementation.

For other codecs the fees are here:
MPEG2: http://www.mpegla.com/m2/m2web_licenseterms.ppt
VC-1: http://www.mpegla.com/vc1/vc1web.ppt

Thanks for the kind comments about our GUI, and yes, support does seem to matter to our customers. We've been developing the technologies around ContentAgent for 4 years, and in that time we have built up a vast knowledge and shared it with our users via our knowledge base. But the industry also changes fast, and you need a company or people who can see what's on the horizon and develop for the future. Seeing so many different people's workflows and how people are using digital media is why I think Root6 Technology is such a forward-looking company. Companies have a lot to learn from the open source community. Some projects, like Media Portal (http://www.team-mediaportal.com/), have been actively worked on by a large number of people and have come a very long way in the last year or so. Hopefully we have something to learn from the FFmpeg community as well.

Regards
Kev*
Root6 Technology
http://www.root6technology.com/products/ContentAgent/demos.html

So the SOAP interface that

kreynen

So the SOAP interface that will give us the status of the file as it passes through the encoding workflow is available now?

On the call, I thought you indicated that this was something that was coming at some point in the future. Even the press release you linked to states...

The implementation of Silverlight for ContentAgent is expected to be included in the next major software release and will be available to existing users.

I thought I was pretty clear about how I defined 'dumb'. The ability to include XML files in a 'drop box' workflow is not the type of system level integration we want to build on. We've done that and recognize the limitations of that type of configuration. The SOAP interface sounds like it is what we are looking for, but unless there is documentation available now and the SOAP interface is going to be included in the version of the software on the system Austin is getting, I can't see how we'll be able to do anything with ContentAgent in March... well, anything other than benchmark ContentAgent against FFmpeg :)

This documentation was

kreynen

This documentation was provided by Anand Jahagirdar from Root 6...

Further to the below, 2.4.5.36 has support for initiating a workflow against a specified media file (see jobxml_filesource.xml). The title attribute is optional – if not specified it will use the filename as normal. There's an example of a metadata group here too, but that again is optional. Just leave out the <CLIPMETADATA> to </CLIPMETADATA> bit.

Remember that multiple jobs can be described in a single XML file, and that XML files can be parsed directly from the Tools menu (‘Read CA API xml’) or by dropping them into a watchfolder.

Make sure your workflow includes an unconnected store step if you want the generated linked source clip to be saved into the database.

<?xml version="1.0" encoding="utf-8" ?>
<CONTENTAGENT>
  <CONTENTAGENTJOB>
    <FILESOURCE title="my clip1" filepath="E:\My_Documents\My Videos\PAL orig\ogymatt_pal_orig.mpg">
      <CLIPMETADATA>
        <METADATACOLLECTION>
          <METADATA groupname="Asset Library" copyinc="true">
            <DATA name="Client" displayname="Client" type="TextField" value="Mattel" />
            <DATA name="Asset_ID" displayname="Asset ID" type="TextField" value="56788" />
          </METADATA>
        </METADATACOLLECTION>
      </CLIPMETADATA>
    </FILESOURCE>
    <STOREWORKFLOW workflowname="create an NTSC Mpeg" workflowid="" />
  </CONTENTAGENTJOB>

</CONTENTAGENT>

JobAPI
Examples of xml files for the job API are enclosed. Dropping these into a watchfolder will create jobs on the queue, provided the named Workflows exist in your database. A use of this would be a means of automating the addition of jobs to ContentAgent’s queue from a separate application. For now the sources described will be logged clips so the workflows should begin with a capture step. We’ll extend this to handle a media file source so you can automate transcoding jobs. This is all useful for clients who have loads of info in databases on other systems and want to get it into CA without too much user interaction.

NB sending a reel into a capture workflow will capture all the described clips to a single file/s.

As usual, for now it is assumed all clips exist on the tape currently in the deck. 2.4.5 will contain a feature so that jobs submitted via the API ask the user to confirm that the specified tape is in the machine. If the user does not have the required tape/s to hand they can choose to postpone that job, which will cause it to be moved to the bottom of the job queue.

Leveraging "Pre-encoders"

kreynen

So it looks like the only part of the new Content Agent API that will be available in March will be the added option of using an XML file to trigger a workflow (or several workflows) on a specific source video. I can see where this would be helpful, because it gives users the option of having the video exist in one place (like a read-only archive) and using a small XML file to trigger the workflow that would normally have been triggered when a much larger video was moved into a drop box/watch folder.

We could write something that kicks out these XML trigger/instruction files and writes them to a directory Content Agent is configured to watch. Obviously this would be much better with a SOAP interface instead of files, but the instructions in the XML elements would be the same. The move from XML file triggers to SOAP is what we've been working on with Telvue for the last 2 years. The same information in the PBCore XML file that we used to write to a file and move to a "hot folder" with the video, we can now insert directly using SOAP (http://PRINCETON-SERVER-IP/program_service/wsdl).
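
As a rough sketch, the 'something' could be as simple as writing a small job file into the watch folder. The XML shape follows the example Root6 provided earlier in this thread; the paths and filename here are made up, and the workflow name comes from that example:

# Hypothetical sketch: drop a ContentAgent job XML into the watch folder to run
# a named workflow against a clip that already lives on storage ContentAgent
# can see. Every path and filename below is a placeholder.
cat > /mnt/contentagent_watch/show_1234.xml <<'EOF'
<?xml version="1.0" encoding="utf-8" ?>
<CONTENTAGENT>
  <CONTENTAGENTJOB>
    <FILESOURCE title="show_1234" filepath="E:\ingest\show_1234.avi" />
    <STOREWORKFLOW workflowname="create an NTSC Mpeg" workflowid="" />
  </CONTENTAGENTJOB>
</CONTENTAGENT>
EOF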

This approach reduces the 'noise' that generating and moving thousands of additional XML files creates, as well as improving the information we get back after triggering a workflow. With the old XML file method, we'd have to look in the playback server's log to see what the problem was with an import. With the SOAP interface, the playback server tells Drupal what the problem was when I attempt to add a program. Without some way to track the progress of the video through Content Agent's workflow, we're left wondering, or digging through Content Agent's logs to figure out what happened to each video Drupal asks to be encoded.

Again, I'll use the AJAX analogy. Until we have a way to communicate with Content Agent using a more "web 2.0" approach, building on Content Agent is a step backwards from the type of communication we get with FFmpeg. Think about this like trying to build Gmail when the only time the browser communicated with the server was when the user clicked Submit.

I think we should move forward using FFmpeg for files processed through the Open Media System and leverage Content Agent to process (or possibly preprocess) files being generated in other parts of your station's workflow. I know you are moving toward HD there. I think it makes sense to use Content Agent to preprocess incoming video into the ACM standard MPEG2 before handing it off to Drupal. Support for that type of preprocessing would work for Portland too, where they utilize Princeton's DVD conversion. Media Mover could skip the step of creating the archival, ACM-friendly MPEG2 when the videos are coming from a preprocessing encoder.

Make sense?

FFMPEG and Content Agent

stefanwray

I thought the information on XML was referring to what Silverlight will support later (after NAB) and not what's available in March. We should clarify.

Regarding your last paragraph, I'm wondering how this suggestion would look from the point of view of the end-user, using the Create Show form. Nothing different there?

Specifically, if Content Agent pre-processes files before handing them off to Drupal -- which version of the file appears in the Upload field on the Create Show form? Would a user deliver a digital file, have it handled by Content Agent, then see that file in a folder, then use the Create Show form? What's the work-flow?

Also, I'm wondering if the ACM standard MPEG2 is identical to the Synergy Broadcast MPEG2 standard.

Finally, on FFMPEG, is installing it on Ubuntu OK? Any issues there? Is it best if FFMPEG sits on its own standalone server?

Even when Content Agent does

kreynen

Even when Content Agent does support the XML "trigger", it still won't be enough to integrate it with other systems. I think the changes Brian and I have discussed will work really well when using Content Agent, Princeton, ProCoder or any other dedicated encoder.

Hi Stefan -- I'll let Kevin

civicpixel

Hi Stefan -- I'll let Kevin handle the response on the content agent / Silverlight stuff.

For the workflow regarding using Content Agent / Princeton / etc to create the initial MPEG2 file, it is an easy adjustment on the workflow end. We're looking to integrate Princeton's drag&drop DVD encoder into our process in the same way, and the workflow looks like this:

  1. User drags file into Princeton/Content Agent hotfolder
  2. Princeton/Content Agent encodes the file as MPEG2, then spits it out to the 'ingest' folder (a folder on the user's account that is accessible to the content upload form on the site)
  3. User loads up the website form; instead of selecting a raw file from the ingest folder, they select the MPEG2, which gets immediately attached to the MPEG2 filefield
  4. Media Mover / Open Media System picks it up from here, generating the flash and thumbnails
