{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Visualize Pupil Positions in Eye Video\n",
"\n",
"This is a quick demo for how to load a frames from an exported eye video and match it with the exported pupil positions.\n",
"\n",
"## Note 1: Matching Video Frame Indices to Pupil Positions\n",
"\n",
"Generally you should always match frame-indices via the timestamps file, which is guaranteed to have the correct pupil-timestamp for every frame. With the timestamp you can then search in pupil_positions for the corresponding datum. See example below on how to efficiently use pandas indices for this.\n",
"\n",
"## Note 2: OpenCV for Streaming Video\n",
"\n",
"Setting the frame-pos `cv2.CAP_PROP_POS_FRAMES` with OpenCV does not work as intended unfortunately. Seeking in the VideoCapture through this property will not give you the same frames as when just reading frame-by-frame, so it will result in incorrect frames. We are not sure why this happens, it might be a bug in OpenCV.\n",
"\n",
"**So when using pure OpenCV you can only accurately forward iterate through the video!**\n",
"\n",
"If you need more seeking capabilities (e.g. seeking backwards or to arbitrary frames) you will have to use some more advanced library, e.g. [PyAV](https://github.com/mikeboers/PyAV).\n",
"\n",
"## Example\n",
"Run the following code for an example demo. A windows will open, displaying the exported video frame-by-frame (with red ellipse) with an additional blue ellipse drawn from the matched exported pupil positions.\n",
"\n",
"Press **q** to quit, any other key to advance by 1 frame."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import cv2\n",
"import pandas as pd\n",
"\n",
"# Load timestamps and pupil positions for eye0, pupil positions are indexed by pupil_timestamp.\n",
"timestamps = pd.read_csv(\"eye0_timestamps.csv\")\n",
"pupil_positions = pd.read_csv(\"pupil_positions.csv\", index_col=\"pupil_timestamp\")\n",
"pupil_positions = pupil_positions[pupil_positions[\"eye_id\"] == 0]\n",
"\n",
"# Load video\n",
"video = cv2.VideoCapture(\"eye0.mp4\")\n",
"n_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))\n",
"\n",
"frame_idx = 0\n",
"while frame_idx < n_frames:\n",
" \n",
" # Read frame. Note: This will read frame-by-frame accurately!\n",
" # Do not try to call video.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)!\n",
" success, frame = video.read()\n",
" if success:\n",
" \n",
" # Get corresponding pupil_time for frame_idx\n",
" pupil_time = timestamps.iloc[frame_idx]\n",
" # Get corresponding pupil positions for pupil_time\n",
" pupil_data = pupil_positions.loc[pupil_time]\n",
" \n",
" # Render ellipse from pupil positions data\n",
" cv2.ellipse(\n",
" img=frame,\n",
" center=(int(pupil_data[\"ellipse_center_x\"]), int(pupil_data[\"ellipse_center_y\"])),\n",
" axes=(pupil_data[\"ellipse_axis_a\"]/2, pupil_data[\"ellipse_axis_b\"]/2),\n",
" angle=pupil_data[\"ellipse_angle\"],\n",
" startAngle=0,\n",
" endAngle=360,\n",
" color=(255, 0, 0),\n",
" thickness=1\n",
" )\n",
" \n",
" # Flip and enlarge for better visibility\n",
" frame = cv2.flip(frame, -1)\n",
" frame = cv2.resize(frame, (400, 400))\n",
" \n",
" cv2.imshow(\"Video\", frame)\n",
" \n",
" key = cv2.waitKey()\n",
" if key == ord('q'):\n",
" break\n",
" frame_idx += 1\n",
" \n",
"video.release()\n",
"cv2.destroyAllWindows()"
]
}
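,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sketch: Nearest-Timestamp Matching with pandas\n",
"\n",
"The demo above matches each frame to its pupil datum with an exact `.loc` lookup, which relies on both exported files containing bit-identical float timestamps. As a minimal standalone sketch, the following cell looks up the *nearest* timestamp instead, which also tolerates small float mismatches. The helper name `pupil_datum_for_frame` is hypothetical, and the sketch assumes the filtered index is sorted and unique."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"\n",
"timestamps = pd.read_csv(\"eye0_timestamps.csv\")\n",
"pupil_positions = pd.read_csv(\"pupil_positions.csv\", index_col=\"pupil_timestamp\")\n",
"pupil_positions = pupil_positions[pupil_positions[\"eye_id\"] == 0]\n",
"\n",
"def pupil_datum_for_frame(frame_idx):\n",
"    \"\"\"Hypothetical helper: pupil datum closest in time to a video frame.\"\"\"\n",
"    frame_time = timestamps.iloc[frame_idx, 0]\n",
"    # get_indexer with method=\"nearest\" requires a sorted, unique index.\n",
"    pos = pupil_positions.index.get_indexer([frame_time], method=\"nearest\")[0]\n",
"    return pupil_positions.iloc[pos]\n",
"\n",
"print(pupil_datum_for_frame(0))"
]
}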
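,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sketch: Seeking to Arbitrary Frames with PyAV\n",
"\n",
"Since pure OpenCV only iterates forward reliably, here is a minimal seeking sketch with [PyAV](https://github.com/mikeboers/PyAV). The helper name `get_frame` is hypothetical, and the index-to-timestamp conversion assumes a constant frame rate. Seeking lands on a keyframe, so the loop decodes forward from there until it reaches the requested frame."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import av\n",
"\n",
"container = av.open(\"eye0.mp4\")\n",
"stream = container.streams.video[0]\n",
"\n",
"def get_frame(frame_idx):\n",
"    \"\"\"Hypothetical helper: decode a single frame by index via seeking.\"\"\"\n",
"    # Convert the frame index to a presentation timestamp in time_base units,\n",
"    # assuming a constant frame rate (stream.average_rate).\n",
"    target_pts = int(frame_idx / stream.average_rate / stream.time_base)\n",
"    # Seek to the nearest keyframe at or before the target ...\n",
"    container.seek(target_pts, stream=stream)\n",
"    # ... then decode forward until the target frame is reached.\n",
"    for frame in container.decode(stream):\n",
"        if frame.pts >= target_pts:\n",
"            return frame.to_ndarray(format=\"bgr24\")\n",
"    return None\n",
"\n",
"# Jump straight to frame 100, then back to frame 10: not possible with\n",
"# cv2.CAP_PROP_POS_FRAMES, but accurate here.\n",
"frame_100 = get_frame(100)\n",
"frame_10 = get_frame(10)"
]
}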
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}