Working with Red5 is like pulling teeth.
Trying to see if I can spawn an ffmpeg instance after uploading a live video capture.
First step was to see if I could edit/copy an existing application to get the recipe. The oflaDemo application is the usual kick-off for such things, but it is in fact a very simplistic demo and not exactly what I had in mind. All it really does is list the media files in the streams folder and report back to the Flash client.
A better example appeared to be Carl Sziebert’s server side stream-recording:
http://sziebert.net/posts/server-side-stream-recording-updated/
He gives us the Java code (and ActionScript, too). It seems to interact with Red5 at a lower level than the base ActionScript NetConnection tools. This, I hope, will allow me to follow stream.saveAs(streamName, false);
with something like child = Runtime.getRuntime().exec(command);
But first, how to compile and install a new app?
I got the red5 sample code using
svn checkout http://red5.googlecode.com/svn/java/example/trunk/
I duplicated the oflaDemo folder inside folder trunk, and renamed it Recorder.
I replaced the Application.java and StreamManager.java files in /src/org/red5/demos/oflaDemo with the Sziebert code, and renamed the enclosing folder to /src/org/red5/demos/Recorder.
I set the working directory to trunk/Recorder, set RED5_HOME to the correct path, and invoked ant to compile. I looked in the dist folder for Recorder.war, renamed it Recorder.zip, unzipped it, and moved the result to Red5/webapps.
In webapps/Recorder/WEB-INF, I edited red5-web.properties, red5-web.xml, and web.xml to replace references to org/red5/demos/oflaDemo with net/sziebert/tutorials.
I restarted red5 and tried my old Flash client (modified to connect to the server address xxx.yyy.zzz.qqq/Recorder); it worked as it did before.
After looking again at my ActionScript Flash client, I see that I already have:
function doCloseRecord() {
    // after we have hit "Stop" recording and after the buffered video data has been
    // sent to the Flash Media Server close the publishing stream
    nsPublish.close();
    nsPublish.publish("null");
    Wait.text = "Done!";
}
So, it turns out I know exactly when my stream has finished uploading, and it is safe to launch ffmpeg. I really didn’t need Sziebert’s low-level stuff. I just needed to rewrite Application.java to include a method, which I could name ‘launchexternal’, that execs a shell command. Then my ActionScript code becomes
function Fault(f:Event) {
    trace("There was a problem: " + f.toString());
}
function Result(result:Object) {
    trace("result from launchexternal " + result);
}
var res = new Responder(Result, Fault);
function doCloseRecord() {
    // after we have hit "Stop" recording and after the buffered video data has been
    // sent to the Flash Media Server close the publishing stream
    nsPublish.close();
    nsPublish.publish("null");
    Wait.text = "Done!";
    nc.call("launchexternal", res);
}
Fault and Result are the callbacks that receive results through the Responder object. The Responder gets passed to the nc.call method, and the return value of launchexternal is delivered to Result.
At first blush, then, my new Application.java starts to look like this:
// requires imports of java.io.InputStream and java.io.IOException
public String launchexternal() {
    log.info("wrecorder launchexternal");
    System.out.println("wrecorder launchexternal");
    try {
        String[] commands = new String[]{"ls", "-lt", ".."};
        Process child = Runtime.getRuntime().exec(commands);
        InputStream in = child.getInputStream();
        int c;
        while ((c = in.read()) != -1) {
            System.out.print((char) c);
        }
        in.close();
    } catch (IOException e) {
        log.error("launchexternal failed", e);
    }
    return "true";
}
By gosh, it worked, and the results of the ls command were spewed across the console from which I launched red5.
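Rather than dumping characters to the console, the output could be captured as a String and handed back to the Flash client through the Responder. A minimal sketch of that idea, outside of Red5 (the class and method names here are mine, not Red5 API):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ExecCapture {
    // Run a command and collect its stdout into a String, line by line,
    // instead of printing it; the caller can return it to the client.
    public static String run(String... command)
            throws IOException, InterruptedException {
        Process child = Runtime.getRuntime().exec(command);
        StringBuilder out = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(child.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        child.waitFor();
        return out.toString();
    }
}
```

Note that this still blocks until the command finishes, which is fine for a quick ls but not for a long ffmpeg transcode.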
But was it an asynchronous background process? Further research into Runtime.getRuntime().exec suggested that ProcessBuilder is now the preferred approach.
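A minimal ProcessBuilder sketch of what launchexternal could become: redirecting output to a log file means nothing reads the pipe inline, so start() returns immediately and the child keeps running in the background. The class and method names are mine, and the ls command is a stand-in for the eventual ffmpeg invocation.

```java
import java.io.File;
import java.io.IOException;

public class BackgroundLauncher {
    // Start a command asynchronously. Output goes to a log file instead of
    // being read here, so the calling (Red5) thread is not blocked.
    public static Process launch(File logFile, String... command)
            throws IOException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true);   // fold stderr into stdout
        pb.redirectOutput(logFile);     // no pipe to drain, so no blocking
        return pb.start();              // returns at once; child runs on
    }
}
```

From launchexternal one would call something like launch(new File("transcode.log"), "ls", "-lt", "..") and simply return, letting the process finish on its own.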