How Programmers Make Color Music

A moment from a movie once stuck in my memory: one of the main characters was deftly throwing her arms and legs up in time with the melody. What made it memorable was that it wasn't only the heroine worth watching, but also the monitors flashing behind her.


Recently, as for many others, the time I spend within four walls has grown considerably, and an idea surfaced: what is the strangest way to recreate a scene like that?


Looking ahead, I'll say that the choice fell on the web browser, namely WebRTC and the WebAudio API.


All sorts of options came to mind, from the simple (dig up a radio receiver, or find a player with that kind of visualization) to the long (write a client-server application that pushes color information over sockets). Then it dawned on me: the browser already has every component needed, so I would try to do it there.


WebRTC is a way to transfer data from browser to browser (peer-to-peer), so at first it seemed I wouldn't have to build a server at all. But that's not quite true. To establish an RTCPeerConnection you need two servers: signaling and ICE. For the second you can take a ready-made solution (STUN and TURN servers are in the repositories of many Linux distributions); the first one needs a bit of work.
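For the ready-made half, the coturn package (available in Debian and Ubuntu repositories) covers both STUN and TURN. A minimal /etc/turnserver.conf matching the static credentials used in the client code below might look like this (the realm value is my assumption):

# /etc/turnserver.conf: static long-term credentials
listening-port=3478
lt-cred-mech
user=rtc:demo
realm=colormusic.local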


The documentation says that any bidirectional communication protocol can act as signaling, and right away suggests WebSockets, long polling, or anything else you can come up with. The simplest option seemed to be to take a hello-world from the documentation of some library. Here is a simple signaling server:


import os
from aiohttp import web, WSMsgType

routes = web.RouteTableDef()
routes.static("/def", os.path.join(os.path.dirname(os.path.abspath(__file__)), 'static'))

ws_client_connections = set()

@routes.post('/broadcast')
async def broadcast(request):
    # Fan the POSTed message out to every connected WebSocket client.
    data = await request.text()
    for conn in list(ws_client_connections):
        await conn.send_str(data)
    return web.Response(text="Hello, world")

async def websocket_handler(request):
    ws = web.WebSocketResponse(autoping=True)
    await ws.prepare(request)

    ws_client_connections.add(ws)
    async for msg in ws:
        if msg.type == WSMsgType.TEXT:
            if msg.data == 'close':
                await ws.close()
                ws_client_connections.discard(ws)
            else:
                continue
        elif msg.type == WSMsgType.ERROR:
            print('conn lost')
            ws_client_connections.discard(ws)
    return ws

if __name__ == '__main__':
    app = web.Application()
    app.add_routes(routes)
    app.router.add_get('/ws', websocket_handler)
    web.run_app(app)

I didn't even implement handling of messages from the WebSocket clients; I just made a POST endpoint that forwards a message to everyone. The copy-it-from-the-docs approach is very much my style.
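The client reaches this endpoint with a small helper. The snippets below use postData() without showing it, so here is a minimal sketch of what it presumably does:

// Sketch of the postData() helper used throughout the client code:
// POST a ready-made JSON string to the /broadcast endpoint above.
function postData(data) {
  return fetch('/broadcast', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: data
  });
}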


Next, to establish a WebRTC connection between browsers, a simple exchange of "hi, can you?" and "yes, I can" takes place. The diagram shows it quite clearly:



(Diagram taken from the documentation page.)
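On the receiving end of the signaling channel, each browser listens on the WebSocket and dispatches messages to the handlers shown below. The exact message shapes here are my assumptions, inferred from those handlers:

// Sketch of the signaling dispatch; the "offer"/"accept"/"candidate"
// message types are assumptions based on the handlers further down.
const signalSocket = new WebSocket(`ws://${window.location.host}/ws`);
signalSocket.onmessage = (event) => {
  const data = JSON.parse(event.data);
  switch (data.type) {
    case 'offer':      // a host invites this browser to listen
      acceptDescription2(data.descr, data.user);
      break;
    case 'accept':     // a listener answered the host's offer
      finalizeCandidate(data.descr, data.user);
      break;
    case 'candidate':  // trickled ICE candidates from either side
      addRemoteCandidate(data);  // hypothetical wrapper, see below
      break;
  }
};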


First, you need to create the connection itself:


function openConnection() {
  // ICE servers: a TURN and a STUN server running on the same host
  // that serves the page.
  const servers = { iceServers: [
    {
      urls: [`turn:${window.location.hostname}`],
      username: 'rtc',
      credential: 'demo'
    },
    {
      urls: [`stun:${window.location.hostname}`]
    }
  ]};
  let localConnection = new RTCPeerConnection(servers);
  console.log('Created local peer connection object localConnection');
  dataChannelSend.placeholder = '';
  localConnection.ondatachannel = receiveChannelCallback;

  localConnection.ontrack = e => {
    consumeRemoteStream(localConnection, e);
  };
  let sendChannel = localConnection.createDataChannel('sendDataChannel');
  console.log('Created send data channel');

  sendChannel.onopen = onSendChannelStateChange;
  sendChannel.onclose = onSendChannelStateChange;

  return localConnection;
}

The connection is created and its callbacks are wired up; a data channel is opened with createDataChannel(), and the audio track itself, as shown below, is attached with addTrack().
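Incidentally, the receiveChannelCallback hooked up in openConnection() never appears in the article; a plausible stub (purely an assumption) is nothing more than:

// Hypothetical stub for receiveChannelCallback: the data channel is
// created but plays no further role in the color-music pipeline.
function receiveChannelCallback(event) {
  event.channel.onmessage = (e) => console.log('data channel:', e.data);
}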


function createConnection() {
  if (!iAmHost) {
    alert('become a host first');
    return 0;
  }
  for (let listener of streamListeners) {
    streamListenersConnections[listener] = openConnection();
    streamListenersConnections[listener].onicecandidate = e => {
      onIceCandidate(streamListenersConnections[listener], e, listener);
    };
    // Clone the decoded audio track into every listener's connection.
    audioStreamDestination.stream.getTracks().forEach(track => {
      streamListenersConnections[listener].addTrack(track.clone());
    });
    streamListenersConnections[listener].createOffer().then(
      (offer) => gotDescription1(offer, listener),
      onCreateSessionDescriptionError
    ).then(() => {
      startButton.disabled = true;
      closeButton.disabled = false;
    });
  }
}
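The gotDescription1() used above is also left out of the article. Judging by acceptDescription2() below, a plausible sketch stores the offer as the local description and pushes it through the signaling server:

// Plausible sketch of gotDescription1(); the message shape mirrors
// the "accept" message sent by acceptDescription2() below.
function gotDescription1(offer, listener) {
  return streamListenersConnections[listener].setLocalDescription(offer)
    .then(() => postData(JSON.stringify({
      type: 'offer',
      user: username.value,
      to: listener,
      descr: offer
    })));
}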

Now for the WebAudio API. The host has to take an audio file, decode it, and turn it into a stream that can be fed into the connection:


function createAudioBufferFromFile() {
  let fileToProcess = new FileReader();
  fileToProcess.readAsArrayBuffer(selectedFile.files[0]);
  fileToProcess.onload = () => {
    let audioCont = new AudioContext();
    audioCont.decodeAudioData(fileToProcess.result).then(res => {
      // Play the decoded buffer into a MediaStream destination,
      // whose tracks can then be sent over WebRTC.
      let source = audioCont.createBufferSource();
      audioSource = source;
      source.buffer = res;
      let dest = audioCont.createMediaStreamDestination();
      source.connect(dest);
      audioStreamDestination = dest;

      source.loop = false;
      // source.start(0)
      iAmHost = true;
    });
  };
}

Here the selected mp3 file is read and decoded. An AudioContext turns it into a MediaStream, whose tracks, as shown earlier, are attached to the WebRTC connection with addTrack().
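Note that source.start(0) is commented out above: playback presumably begins only once the listeners are connected. A hypothetical trigger, not from the article, might be as simple as:

// Hypothetical helper: once the offers are out, start the source so
// audio begins flowing into every track added via addTrack().
function startBroadcast() {
  if (iAmHost && audioSource) {
    audioSource.start(0);
  }
}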


Then createOffer() produces the offer, which travels through the signaling server to the listener. The listener accepts it and replies with an answer:


function acceptDescription2(desc, broadcaster) {
  return localConnection.setRemoteDescription(desc)
    .then(() => {
      return localConnection.createAnswer();
    })
    .then(answ => {
      return localConnection.setLocalDescription(answ);
    })
    .then(() => {
      postData(JSON.stringify({
        type: "accept",
        user: username.value,
        to: broadcaster,
        descr: localConnection.currentLocalDescription
      }));
    });
}

The host accepts the answer in the same manner and gets on with the ICE candidates:


function finalizeCandidate(val, listener) {
  console.log('accepting connection');
  const a = new RTCSessionDescription(val);
  streamListenersConnections[listener].setRemoteDescription(a).then(() => {
    dataChannelSend.disabled = false;
    dataChannelSend.focus();
    sendButton.disabled = false;

    processIceCandiates(listener);
  });
}

The candidates received from the other side are added to the connection on both ends:


let conn = localConnection ? localConnection : streamListenersConnections[data.user];
conn.addIceCandidate(data.candidate).then(onAddIceCandidateSuccess, onAddIceCandidateError);
processIceCandiates(data.user);

Once addIceCandidate() succeeds on both sides, the connection is established.
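The processIceCandiates() called above is not shown either. A common pattern it likely implements: candidates that arrive before setRemoteDescription() completes cannot be added yet, so they are buffered per peer and drained afterwards. A sketch under that assumption:

// Assumed implementation: buffer early candidates, drain them once
// the remote description for that peer is in place.
const pendingCandidates = {};  // hypothetical queue, keyed by peer name

function processIceCandiates(peer) {
  const conn = localConnection ? localConnection : streamListenersConnections[peer];
  for (const candidate of pendingCandidates[peer] || []) {
    conn.addIceCandidate(candidate).then(onAddIceCandidateSuccess, onAddIceCandidateError);
  }
  pendingCandidates[peer] = [];
}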


Now the receiving side. When the remote track arrives, we consume the stream and stretch a canvas over the whole page (our future color screen).


function consumeRemoteStream(localConnection, event) {
  console.log('consume remote stream');

  // Stretch the canvas over the whole page.
  const styles = {
    position: 'absolute',
    height: '100%',
    width: '100%',
    top: '0px',
    left: '0px',
    'z-index': 1000
  };
  Object.keys(styles).map(i => {
    canvas.style[i] = styles[i];
  });

  // Wrap the incoming track in a MediaStream and route it through
  // an analyser node on the way to the speakers.
  let audioCont = new AudioContext();
  audioContext = audioCont;
  let stream = new MediaStream();
  stream.addTrack(event.track);
  let source = audioCont.createMediaStreamSource(stream);
  let analyser = audioCont.createAnalyser();
  analyser.smoothingTimeConstant = 0;
  analyser.fftSize = 2048;
  source.connect(analyser);
  audioAnalizer = analyser;
  audioSource = source;
  analyser.connect(audioCont.destination);
  render();
}

The incoming track is wrapped in a MediaStream, which is plugged into an AudioContext as an AudioNode. An analyser created with createAnalyser() is inserted into the chain, giving access to the frequency data of whatever is playing.


The drawing itself:


function render() {
  var freq = new Uint8Array(audioAnalizer.frequencyBinCount);
  audioAnalizer.getByteFrequencyData(freq);
  // Cut the selected band out of the spectrum; bins cover 0..sampleRate/2.
  let band = freq.slice(
    Math.floor(freqFrom.value / audioContext.sampleRate * 2 * freq.length),
    Math.floor(freqTo.value / audioContext.sampleRate * 2 * freq.length));
  let avg = band.reduce((a, b) => a + b, 0) / band.length;

  // Map the average amplitude onto a red-to-blue gradient.
  context.fillStyle = `rgb(
        ${Math.floor(255 - avg)},
        0,
        ${Math.floor(avg)})`;
  context.fillRect(0, 0, 200, 200);
  requestAnimationFrame(render);
}

Out of the whole spectrum a single band is cut, the one this particular browser will represent, and averaging it gives the amplitude. That same amplitude then drives the color between red and blue, which is painted across the canvas.
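For instance, at a 48 kHz sample rate with fftSize = 2048 there are 1024 bins covering 0 to 24 kHz, about 23.4 Hz each, so a 200-800 Hz band maps to roughly bins 8 through 34:

// Worked example of the band-to-bin mapping in render(), assuming
// sampleRate = 48000 and frequencyBinCount = 1024 (fftSize / 2).
const sampleRate = 48000, binCount = 1024;
const lo = Math.floor(200 / sampleRate * 2 * binCount); // 8
const hi = Math.floor(800 / sampleRate * 2 * binCount); // 34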


With a certain amount of imagination, the result looks something like this:


