Assuming the following code:

```python
import logging

from BaseHTTPServer import BaseHTTPRequestHandler

from mopidy import get_version

logger = logging.getLogger('mopidy.outputs.http')

class StreamingHTTPRequestHandler(BaseHTTPRequestHandler):
    server_version = 'HTTPOutput/%s' % get_version()

    def do_GET(self):
        self.close_connection = 0  # keep the socket open for streaming
        register_fd_with_output(self.wfile.fileno())  # see below

    def log_message(self, format, *args):
        logger.debug(format, *args)
```
register_fd_with_output() is a function that emits the add signal on the fd sink in an HTTPOutput (audioconvert ! lame ! multifdsink). If we then have a BaseHTTPServer.HTTPServer running in a ThreadingActor with the StreamingHTTPRequestHandler, we should have working HTTP streaming.
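For illustration, here is a minimal sketch of that idea using Python 3's http.server (the modern successor of BaseHTTPServer), with a plain thread standing in for the ThreadingActor. The fd handover to multifdsink is replaced by a stand-in payload write, and all names here are assumptions, not Mopidy code:

```python
import logging
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

logger = logging.getLogger('mopidy.outputs.http')

class StreamingHTTPRequestHandler(BaseHTTPRequestHandler):
    server_version = 'HTTPOutput/0.0'  # would be get_version() in Mopidy

    def do_GET(self):
        self.close_connection = False  # keep the socket open for streaming
        self.send_response(200)
        self.send_header('Content-Type', 'audio/mpeg')
        self.end_headers()
        # In the real design this is where the client fd would be handed to
        # multifdsink; here we just write a stand-in payload instead.
        self.wfile.write(b'FAKEMP3DATA')

    def log_message(self, format, *args):
        logger.debug(format, *args)

def start_streaming_server(port=0):
    """Start the server on a background thread; return (server, bound port)."""
    server = ThreadingHTTPServer(('127.0.0.1', port),
                                 StreamingHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

ThreadingHTTPServer is used so that a client kept open for streaming does not block the accept loop, which is the same reason the original sketch wants a ThreadingActor.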
For additional features like metadata the following links should be useful:
The code above should probably not be used for solving this issue. With the current state of the code base we want something that builds on cherrypy.
The first thing that needs to be done is to create and register a GStreamer element we can use in the
OUTPUT setting. Using the mixers, the code in #152, or the Pitivi examples should be enough to get started. The element will basically wrap a
multifdsink, handle the cherrypy stuff, and pass the FDs between them.
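Conceptually, the wrapped pipeline would look something like this (gst-launch notation for the 0.10-era elements named above; illustrative only, not tested):

```
audioconvert ! lame ! multifdsink name=httpsink

# each accepted HTTP client fd would be attached via the sink's
# "add" action signal:
#   httpsink.emit('add', client_fd)
```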
As for the HTTP part, there are at least two ways of going about it. The first one is probably the easiest: start a cherrypy server independent of the frontend one and use that. The second one, the one I think we want, is having a singleton HTTP server shared between the frontend and streaming parts of the code. This part is currently the big unknown as far as I am concerned.
Once an HTTP server is in place we need to detach the socket from cherrypy; looking at ws4py, it seems setting
request.rfile.rfile._sock = None is the most likely workaround we can manage. Before we detach we probably want to emit some headers; then we detach, emit the fd to the sink, and we should be streaming. Looking at how ws4py takes over sockets should enable us to figure out the details.
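The detach-and-hand-over step can be sketched with the stdlib alone. `hand_off_to_sink` and `emit_fd` are made-up names; the real handover would go through cherrypy and multifdsink's add signal:

```python
import os
import socket

def hand_off_to_sink(conn, emit_fd):
    """Emit streaming headers on `conn`, then detach the fd and pass it on.

    `emit_fd` stands in for emitting multifdsink's 'add' signal.
    """
    conn.sendall(
        b'HTTP/1.0 200 OK\r\n'
        b'Content-Type: audio/mpeg\r\n'
        b'\r\n'
    )
    # After detach() the socket object no longer owns the descriptor, so the
    # HTTP layer can forget about it while the sink keeps writing to the fd.
    fd = conn.detach()
    emit_fd(fd)
    return fd
```

With a socketpair standing in for a real client connection, the detached fd keeps working after the original socket object is discarded.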
The other part that is slightly unknown is how we extend our core audio API to allow for emitting the FDs. That is either something specialized for just this case or something more general that gives access to the output bin we are using.
Why I never thought of this earlier I have no idea, but as for stealing sockets, we should just use
socket.fromfd to create a new copy so that the original can be closed with no ill effects.
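A quick stdlib check of that claim, using a socketpair in place of a stolen client socket:

```python
import socket

a, b = socket.socketpair()

# socket.fromfd() dup()s the underlying descriptor, so `stolen` is an
# independent socket object on a brand new fd.
stolen = socket.fromfd(a.fileno(), socket.AF_UNIX, socket.SOCK_STREAM)

a.close()  # closing the original has no ill effect on the copy
stolen.sendall(b'still alive')
print(b.recv(1024))  # b'still alive'
```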
Are there plans to add output streaming in the near future?
I know this is months old, but this is the only thing holding me back from switching from vanilla MPD. I run MPD on my server, which has no speakers attached, and tune in to the HTTP stream from various computers/devices around the house and elsewhere. I'd LOVE to see this in Mopidy. And if it's already there, please point me in the right direction! Been reading the manuals for a while and can't seem to find a way to get this working.
AFAIK it is possible to get working HTTP streaming from Mopidy by combining it with Icecast and a couple of hacks, documented at https://docs.mopidy.com/en/latest/config/#streaming-through-shoutcast-icecast
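For reference, the linked setup boils down to pointing Mopidy's audio output at a local Icecast server via shout2send; the exact values here (mount, port, password) are examples and must match your Icecast configuration:

```ini
[audio]
output = lamemp3enc ! shout2send async=false mount=mopidy ip=127.0.0.1 port=8000 password=hackme
```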
Work is ongoing on a gapless branch (see #1288) which together with the next round of gapless improvements should make Mopidy+Icecast work without any hacks to keep the stream alive.
Just want to chime in that I also have serious interest in HTTP Streaming. I want to use it to listen to Spotify on the awesome Moped web interface, from my web server/VPN, at work. Was rather upset when I ran it locally and heard music, then got a tonne of JACK errors when I tried to run it remotely. If I knew how to code better I'd try it myself. Thanks!
@jodal Are the issues you mentioned regarding #1288 resolved?
i.e. Can this part of the documentation be removed?
Currently, Mopidy does not handle end-of-track vs end-of-stream signalling in GStreamer correctly. This causes the SHOUTcast stream to be disconnected at the end of each track, rendering it quite useless. For further details, see #492. You can also try the workaround mentioned below.
@JohnMaguire I don't know if there's a section somewhere in the docs we've forgotten, since you didn't include a link. AFAIK the "develop" version of the docs is up to date with the upcoming changes in 2.0.
It seems it was updated in the develop version of the docs, you're correct: https://docs.mopidy.com/en/develop/audio/#streaming-through-icecast
any updates in the past 2 years?
any updates in the past 5 years?
No. Icecast remains the best solution.
As part of Mopidy 3 we bumped our minimum supported version of GStreamer, which opens up some better options for HTTP streaming, e.g. hlssink2. If anyone is interested in pursuing this, that might be a good place to start.
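As a starting point, an audio-only HLS pipeline could look roughly like this (untested sketch; hlssink2 muxes internally, and the exact element names depend on the GStreamer 1.x plugins installed):

```
gst-launch-1.0 audiotestsrc ! audioconvert ! avenc_aac ! hlssink2 \
    target-duration=5 max-files=5 \
    playlist-location=/tmp/hls/playlist.m3u8 \
    location=/tmp/hls/segment%05d.ts
```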
Snapcast is another solution. It's made for multiroom playback, but I don't see any reason it can't be used for streaming to a single client too. I haven't tested it, but it looks promising.
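For reference, the usual Mopidy-to-Snapcast wiring (per Snapcast's setup docs; the FIFO path and caps are the conventional defaults) has Mopidy write raw audio into a named pipe that snapserver reads:

```ini
[audio]
output = audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! wavenc ! filesink location=/tmp/snapfifo
```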
Yes, I've used Snapcast. The use of a FIFO is a bit fiddly, and a new alternative method was being developed, but last I heard that method came with its own issues. Also note that it is not HTTP streaming.