{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "55b31aa4", "metadata": {}, "source": [ "# Tutorial: Auditory Stimuli Inputs using FilterNet\n", "\n", "* Allows users to use audio wav files as stimuli for virtual neurons with filters that detect spectral and temporal modulation\n", "\n", "* Users need to install pycochleagram to run this\n", "https://github.com/mcdermottLab/pycochleagram\n", "https://readthedocs.org/projects/pycochleagram/\n", "\n", "In the first part of the tutorial, we will make a simple auditory filter virtual neuron. For convenience, we will clone it five times to simulate 5 trials for the creation of a peristimulus time histogram.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "741ff9f8-2287-485e-a7cd-1c7b1c2bf4ff", "metadata": { "scrolled": true }, "outputs": [], "source": [ "import numpy as np\n", "from matplotlib import pyplot as plt\n", "from bmtk.builder import NetworkBuilder\n", "import shutil\n", "import os\n", "\n", "# Clear existing outputs and network files if rerunning\n", "if os.path.exists('./sim_aud/output'):\n", " shutil.rmtree('./sim_aud/output')\n", "if os.path.exists('./sim_aud/network'):\n", " shutil.rmtree('./sim_aud/network')\n", " \n", "# Add a single node and plot its properties and response to the sound \n", "\n", "net = NetworkBuilder('aud') # Initialize network called 'aud'\n", "\n", "net.add_nodes(\n", " N = 5,\n", " model_type='virtual',\n", " model_template='audmodel:AUD_filt',\n", " y = 4, # log2(center frequency/50 Hz)\n", " t_mod_freq = 5.0,\n", " sp_mod_freq = 0.0,\n", " delay = 5, # ms\n", " dynamics_params='AUD_filt_partial.json',\n", " plot_filt = True\n", ")\n", "\n", "# Create and save the network\n", "net.build()\n", "net.save_nodes(output_dir='sim_aud/network')\n", "\n", "from bmtk.utils.sim_setup import build_env_filternet\n", "\n", "build_env_filternet(\n", " base_dir='sim_aud', \n", " network_dir='sim_aud/network', \n", " tstop=3000.0, # run the simulation for 3 seconds \n", " include_examples=True) # includes example model files which we'll use in this tutorial\n", " \n", "from bmtk.simulator import filternet\n", "\n", "config = filternet.Config.from_json('sim_aud/config.json')\n", "config.build_env()\n", "net = filternet.FilterNetwork.from_config(config)\n", "sim = filternet.FilterSimulator.from_config(config, net)\n", "sim.run()" ] }, { "cell_type": "code", "execution_count": null, "id": "889c4ece-f385-4647-8d19-e653e6b4beac", "metadata": {}, "outputs": [], "source": [ "from bmtk.analyzer.spike_trains import plot_raster\n", "from scipy import signal\n", "from scipy.io import wavfile\n", "\n", "fig, ax0 = plt.subplots(1, 1, figsize = (6,4.5),sharex=True)\n", "\n", "sample_rate, samples = wavfile.read('sim_aud/audio/sa1.wav')\n", "frequencies, times, spectrogram = signal.spectrogram(samples, sample_rate)\n", "\n", "ax0.pcolormesh(times*1000, frequencies/1000, np.log(spectrogram))\n", "ax0.set_ylabel('Frequency [kHz]')\n", "ax0.set_xlabel('Time [ms]')\n", "ax0.set_xlim((0,3000))\n", "\n", "figr = plot_raster(config_file='sim_aud/config.json', group_by='model_template')\n", "figr.set_figwidth(6)" ] }, { "cell_type": "markdown", "id": "11fdf5b3", "metadata": {}, "source": [ "Let's take a look in the simulation_config.json file, where the stimulus is controlled:\n", "\n", "```json\n", "\"inputs\": {\n", " \"movie\": {\n", " \"input_type\": \"audio\",\n", " \"module\": \"wav_file\",\n", " \"data_file\": \"$BASE_DIR/audio/sa1.wav\",\n", " \"normalize\": \"full\",\n", " \"interp_to_freq\": true,\n", 
" \"padding\": \"edge\"\n", " }\n", "}\n", "``` \n", "\n", "To change the stimulus to a WAV file of your choice, point to the relative path of the file under \"data_file\". " ] }, { "attachments": {}, "cell_type": "markdown", "id": "b6c384b8", "metadata": {}, "source": [ "The filter carrier consists of a sinusoidal modulation in 2D akin to a plane wave. This carrier is multiplied by a Gaussian envelope in the spectral axis and an asymmetric scaled gamma distribution function in the temporal axis to allow for faster onset of responses and a slower tail decay. \n", "\n", "Filters with very little spectral modulation have a \"vertical\" appearance and respond preferentially to broadband temporal edges such as sound onsets. Filters with very little temporal modulation have a \"horizontal\" appearance and respond preferentially to sustained spectral edges. If the nodes are ordered by their center frequencies, we can construct different \"views\" of the stimulus (speech in this case) through these different types of filters.\n", "\n", "