Eye Tracking Open Source | What Is Eye Tracking and How Does It Work | CoolTool's Demo Video – Popular Answer Update

Are you looking for the topic "eye tracking open source – What Is Eye Tracking and How Does It Work | CoolTool's Demo Video"? The website https://you.charoenmotorcycles.com answers all of your questions in the following category: https://you.charoenmotorcycles.com/blog. You can find the answer right below. The article, written by CoolTool, has 13,652 views and 92 likes.


Watch a video on the eye tracking open source topic

Watch the video on this topic here. Please look closely and give feedback on what you are reading!

See the details of What Is Eye Tracking and How Does It Work | CoolTool's Demo Video – on the eye tracking open source topic – here

#EyeTracking is a technology that allows you to understand what a person is really looking at while watching advertising, viewing design layouts, browsing a website, etc. It is the most objective method to measure consumers’ attention.
At CoolTool.com, eye-tracking technology (as well as other neuromarketing technologies) is fully integrated with our surveys engine. It allows you to cross-analyze consumers’ explicit responses with their nonconscious reactions and get the most reliable insights. It is all fully automated.
You can run eye-tracking studies using a special device (infrared eye tracker) or via a standard webcam. The accuracy of the webcam-based eye tracking results depends on the webcam resolution.
#Neuromarketing #CX #Insights #Marketing

For more information on the eye tracking open source topic, see below.

PyGaze | Open source eye-tracking software and more.

This is the homepage to PyGaze, an open-source toolbox for eye tracking in Python. It also features related projects, such as PyGaze …


Source: www.pygaze.org

Date Published: 9/20/2021

View: 2251

10 Free Eye Tracking Software Programs [Pros and Cons]

openEyes. This open source software allows eye tracking from both infrared and visible spectrum illumination, using Matlab. positive Pros. + Can …


Source: imotions.com

Date Published: 7/17/2021

View: 2614

Webcam Eye Tracking – eye square

But today, eye square can measure the eye movements of people sitting at their own computers in their own homes, online. Once the user grants temporary access …


Source: www.eye-square.com

Date Published: 1/2/2021

View: 1019

Pupil-Labs – Open source eye tracking – GitHub

Pupil is a project in active, community driven development. Pupil Core mobile eye tracking hardware is accessible, hackable, and affordable. The software is …


Source: github.com

Date Published: 8/13/2022

View: 5551

Cheap, Open Source Eye Tracking You Can Build Yourself

Controlling computers with one’s eyes is a staple of science fiction that makes a lot more sense than Minority Report-style hand-waving…


Source: www.hackster.io

Date Published: 4/30/2021

View: 4832

EyeLoop: An Open-Source System for High-Speed, Closed …

EyeLoop enables low-resource facilities access to eye-tracking and encourages community-based development of software code through a modular, tractable …


Source: www.frontiersin.org

Date Published: 9/26/2021

View: 9756

Pupil Core – Open source eye tracking platform – Pupil Labs

Pupil Core is an eye tracking platform that is comprised of an open source software suite and a wearable eye tracking headset. Pupil Core is more than just a …


Source: pupil-labs.com

Date Published: 9/20/2021

View: 1999

Online Gaze Tracking | Webcam Eye Tracking Software – GazeRecorder

Webcam Eye-Tracker: Cloud eye tracking insights platform for remote … Track eye movements with this handy application that uses your webcam as an input source, and …


Source: redirect.gazerecorder.com

Date Published: 3/24/2022

View: 3525

Computer Access for People with Physical Disabilities Using Eye Tracking and Speech Recognition Technology …

The proposed system uses eye tracking technology so that the user can move the mouse with gaze alone … Google, the most accessible of the available open-source STT (speech-to-text) engines.


Source: www.koreascience.kr

Date Published: 12/11/2021

View: 5413

Images related to the topic eye tracking open source

See more photos related to the topic What Is Eye Tracking and How Does It Work | CoolTool's Demo Video. You can find more related images in the comments, or see more related articles if you need them.

What Is Eye Tracking and How Does It Work | CoolTool's Demo Video

Rating of the article on the topic eye tracking open source

  • Author: CoolTool
  • Views: 13,652
  • Likes: 92
  • Date Published: First published on Aug 7, 2019
  • Video URL: https://www.youtube.com/watch?v=_vZF60ujx0U

Open source eye-tracking software and more.

Testing children is less easy than testing adults, primarily because they lack the social inhibition to tell psychological researchers to go away with their super boring tests. This presents a problem in developmental research: How do you reach these kids?! We developed a bunch of iPad games to test the cognition of an entire classroom in one go. And it works!

10 Free Eye Tracking Software Programs [Pros and Cons]

Researchers have attempted to track eye movements for well over a century. By knowing what is being looked at, it’s possible to understand what is driving visual attention.

This has been important for psychologists and other human behavior researchers, and has become increasingly used by people working in other related fields, such as neuromarketing.

As time has passed, the technology and software have improved as well. This has formed the eye tracking field of today, in which incisive discoveries are more accessible than ever through the use of advanced eye tracking software.

There are various options available for eye tracking, and some of these are offered free-of-charge. While this can be a great benefit for many users, this advantage is dependent on the software working well – if it doesn’t function as hoped, or offer the capabilities required, then the price doesn’t matter. Below, we outline the pros and cons of free eye tracking software and discuss the benefits of using a more advanced platform like iMotions for your research.

Our Free Eye Tracking Software Top 10 List

While this list features 10 eye tracking software programs, we also (quite literally) looked at several others. Those that have been left off the list include deprecated or non-functional programs, as well as those that are not truly free.

Below we have listed 10 eye tracking software programs, showing whether or not certain functions exist and how accessible they are. Each program is described in more detail below.

xLabs

Built as a browser extension for Google Chrome, this startup is the result of two years of R&D by its four co-founders. You can install the software directly into your browser here. xLabs has also led to a spinoff, EyesDecide, which provides testing of stimuli through webcam-based eye tracking.

Pros

+ Very easy to install

+ Easy to use

+ Works on multiple platforms

Cons

– Only works with webcams (decreased accuracy)

– Doesn’t allow integrated stimulus presentation

– No simple way to obtain the data

– No data analysis options

– No support

GazePointer

The GazePointer program is straightforward to install, and runs on Windows, making it one of the more widely accessible programs on this list.

Pros

+ Easy to install

Cons

– Only works with webcams (decreased accuracy)

– Doesn’t allow integrated stimulus presentation

– No support

– No data analysis options

MyEye

MyEye has been designed for use by people with amyotrophic lateral sclerosis (ALS), a neuromuscular disease. It is in the Beta phase of development.

Pros

+ Easy to install

Cons

– No support or even documentation

– Doesn’t allow integrated stimulus presentation

– No data analysis options

– No simple way to obtain the data

Ogama

Ogama is open source software developed at the Free University of Berlin.

Pros

+ Allows basic stimulus presentation

+ Provides basic data analysis options

Cons

– No support

– No updates in over three years

openEyes

This open source software allows eye tracking from both infrared and visible spectrum illumination, using Matlab.

Pros

+ Can be used with webcams and infrared eye trackers

Cons

– Requires Matlab (commercial software) and working knowledge of Matlab

– Doesn’t allow integrated stimulus presentation

– No support

– No data analysis options

PyGaze

This software runs in Python, and was published by three researchers (from Oxford University, Aix-Marseille University, and Utrecht University) in 2014.

Pros

+ Stimulus presentation (requires knowledge of Python)

+ Data analysis (requires knowledge of Python)

Cons

– Requires a good working knowledge of Python

– Not much support
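
To give a sense of what “requires knowledge of Python” means in practice, here is a minimal sketch of a PyGaze script using its documented Display, Screen, and EyeTracker classes. It is an illustration only: the tracker backend, display settings, and calibration behavior depend on your local PyGaze configuration (usually a constants.py file), so treat the exact calls as a starting point rather than a recipe.

import time

from pygaze.display import Display
from pygaze.screen import Screen
from pygaze.eyetracker import EyeTracker

disp = Display()                   # open the experiment window
scr = Screen()                     # off-screen drawing canvas
tracker = EyeTracker(disp)         # backend (EyeLink, Tobii, dummy, ...) comes from settings

tracker.calibrate()                # run the tracker's calibration routine

scr.draw_fixation(fixtype="dot")   # draw a central fixation dot
disp.fill(scr)
disp.show()

tracker.start_recording()
for _ in range(100):               # log roughly one second of gaze samples
    x, y = tracker.sample()        # most recent gaze position in display pixels
    print(x, y)
    time.sleep(0.01)
tracker.stop_recording()

tracker.close()
disp.close()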

OpenGazer

OpenGazer was designed 8 years ago to increase the accessibility of computer use, and was originally supported by Samsung and the Gatsby Charitable Foundation.

Pros

+ Potentially compatible with Apple OS (although requires programming knowledge)

Cons

– Only works with webcams (decreased accuracy)

– Requires Linux (and knowledge of how to use Linux)

– No support

TurkerGaze

TurkerGaze is a software program developed by researchers at Princeton. The system runs in Linux and is dependent on several other Linux programs to function.

Pros

+ Provides basic data analysis options

Cons

– Best used with a head restraint

– Requires Linux (and knowledge of how to use Linux)

GazeParser / Simple Gaze Tracker

This software consists of two components: GazeParser (for stimulus presentation, data conversion, and analysis) and SimpleGazeTracker (used for gaze recording), both of which are used with Python.

Pros

+ Can perform stimulus presentation and data analysis (although requires knowledge of Python)

+ Provides basic data analysis options (with use of Python coding)

Cons

– Requires a motion capture or machine vision camera

– Requires a chinstrap / head restraint to restrict head movement

– Requires a good working knowledge of Python

ITU Gaze Tracker

Originally developed by the Gaze Group at the IT University of Copenhagen, ITU Gaze Tracker is an open source platform designed to increase the accessibility of the technology.

Pros

+ Easy to install

Cons

– Requires building your own infrared eye tracker (if not using a webcam)

– No support

The Verdict

While the free eye tracking software can be a fun experiment – and the feats they pull off are sometimes still impressive (particularly when constructed by small teams) – none are able to meet the requirements of eye tracking for work and research. Making discoveries, and uncovering results, requires that eye tracking is accurate, both in space and time.

Due to these drawbacks, eye tracking software also needs a certain amount of credibility – getting research past peer review will be an even more difficult process if the software itself isn’t widely recognized as producing legitimate results.

A screenshot of GazePointer.

There is – as always – the potential for things to go wrong, and if you encounter any problems while figuring the software out, it’s unlikely that you will receive much support. As many of the programs have been created by individuals or small groups, chances are that they won’t have the time to help users or fix inevitable software bugs.

There are even more things to consider – are there any stimulus presentation options? Can it be integrated with other software, such as PsychoPy, E-Prime, or even in-browser presentation? How is the fixation algorithm constructed? Is the data analyzable, or even accessible?

A screenshot of xLabs.

This could also mean understanding the source code to provide explanations about its use. For a programmer this could be relatively straightforward, but will likely involve a level of expertise well above the average for most people working with eye tracking.

For most of these programs, the available features rarely fit the ideal research situation.

Webcam-based Eye Tracking Software

A shared limitation of many of the free eye tracking programs above is that they only incorporate data from webcams and provide limited data analysis capabilities. This of course promises more accessibility to the user, yet will not be able to deliver the same level of accuracy as infrared eye trackers. This limitation might be acceptable for some researchers, for example if they are interested in recruiting many respondents across global locations or need to distribute studies remotely due to physical limitations, travel restrictions, etc.

A screenshot of the ITU Gaze Tracker.

Choosing which software to use also comes down to trust. If you’re planning on using the software for research, then you need to be able to defend its use to the research community – this isn’t difficult with companies that have already built trust through years of work and communication, but will unavoidably be more difficult with small startup operations.

When Free Options No Longer Cut It

iMotions offers two valuable solutions for eye tracking: our desktop application and Online Data Collection. The desktop solution allows you to synchronize and analyze data from infrared eye trackers, and the Online Data Collection system allows for webcam-based eye tracking collected remotely. Both come with the possibility for robust visualizations, annotations, and metrics exports that are integral to high-quality eye tracking research.

Carrying out research and work with eye tracking in iMotions is both simple and comprehensive. A whole range of features are readily available, allowing you to carry out advanced research in a plug-and-play solution. Some of the features are listed below.

List of features and metrics:

Individual & aggregate gaze replays

Automated AOI generation allows for tracking of an area throughout a video

Automated metrics such as Time to First Fixation (TTFF), time spent, ratio, revisits, fixation count, mouse clicks, keystrokes etc.

Real-time recording

Static & dynamic heatmaps

Create live and post markers

Raw data including X,Y coordinates of eye position, pupil size, & distance to the screen

Well validated in hundreds of publications

Continuous support

Seamless synchronization with other biosensors, such as facial expression analysis, EEG, GSR, ECG, EMG, and more

Integration with 20+ eye tracking models from a range of vendors such as SMI, EyeTech, Eye Tribe, GazePoint, etc. – including screen-based, eye tracking glasses, and webcams

Simple installation and setup

Intuitive user interface

Presentation of screen-based multimedia stimuli (images, videos, websites, games, software interfaces and 3D environments)

Compatible with mobile devices / external interfaces

Real-world recording with glasses or remote eye trackers

Ability to expand use with API

Integrated study setup and design

Integrated data quality assurance tools

Static & dynamic areas of interest (AOIs), manual and semi-automated options

Expanding human behavior research methods to include other sensors will mean more data, and ultimately more incisive findings. iMotions is not only the research platform that is designed (and continually updated) for this purpose – it also includes an easy-to-use setup with a vast array of accurate eye tracking features.

While the price tag – or rather lack of price tag – can be an attractive aspect, if the tracking is slow or inaccurate then little can be done. Accuracy and reliability are necessary when it comes to creating a deeper understanding of human behavior – and that’s difficult to put a price on.

I hope you’ve enjoyed reading about the advantages and disadvantages of free eye tracking software. To learn more about how iMotions can help you carry out flawless eye tracking experiments, and to see which advanced features are available, get in touch and schedule a demo.

Note: this article was originally published in 2019 and has been updated in 2021 to include the new iMotions Online Data Collection module.

eye square

Measure the attention of 1,000 people within three days? With half of them living in rural areas all over Germany? And at a reasonable cost on top of that? Just a few years ago, this was commonly thought to be impossible. Today, however, eye square can measure the eye movements of people sitting in front of their own computers at home, online. Once a user grants temporary access, their eye movements are measured through the webcam attached to their computer. This is an innovative approach that saves an enormous amount of time and money.

pupil-labs/pupil: Open source eye tracking

Pupil

Open source eye tracking platform.

Pupil is a project in active, community driven development. Pupil Core mobile eye tracking hardware is accessible, hackable, and affordable. The software is open source and written in Python and C++ when speed is an issue.

Our vision is to create tools for a diverse group of people interested in learning about eye tracking and conducting their eye tracking projects.

Chat with us on Discord.

Users

You don’t need to know how to write code to use Pupil. Download the latest apps!

Read the Pupil Core user guide.

Developers

There are a number of ways you can interact with Pupil Core software as a developer:

Use the API: Use the network based real-time API to communicate with Pupil over the network and integrate with your application.

Develop a Plugin: Plugins are loaded at runtime from the app bundle. Note: if your plugin requires Python libraries that are not included in the application bundle, then you will need to run from source.

Run from Source: Can’t do what you need to do with the network based api or plugin? Then get ready to dive into the inner workings of Pupil and run from source!

All setup and dependency installation instructions are contained in this repo. All other developer documentation is here.
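
As a rough illustration of the network-based real-time API mentioned above, the sketch below connects to Pupil Remote and prints a few gaze datums. It assumes Pupil Capture is running locally with Pupil Remote on its default port (50020) and that the pyzmq and msgpack packages are installed; see the developer documentation for the authoritative message formats.

import zmq
import msgpack

ctx = zmq.Context()

# Request socket for Pupil Remote commands (default port 50020).
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

# Ask Pupil Remote for the port of the data subscription socket.
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

# Subscribe to gaze data on the IPC backbone.
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

# Print a handful of gaze datums (topic string + msgpack-encoded payload).
for _ in range(10):
    msg = sub.recv_multipart()          # [topic, payload, ...]
    datum = msgpack.unpackb(msg[1])
    print(msg[0].decode(), datum.get("norm_pos"), datum.get("confidence"))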

Installing Dependencies

Clone the repo

After you have installed all dependencies, clone this repo and start Pupil software.

git clone https://github.com/pupil-labs/pupil.git # or your fork
cd pupil

Note: If you are using Windows, you will have to complete a few more steps after cloning the repo. Please refer to the Windows 10 dependencies setup guide.

Run Pupil

cd pupil_src
python main.py capture # or player/service

Command Line Arguments

The following arguments are supported:

  • -h, --help: Show help message and exit.
  • --version: Show version and exit.
  • --debug: Display debug log messages.
  • --profile: Profile the app’s CPU time.
  • -P PORT, --port PORT: (Capture/Service) Port for Pupil Remote.
  • --hide-ui: (Capture/Service) Hide UI on startup.
  • (Player) Path to recording.

License

All source code written by Pupil Labs is open for use in compliance with the GNU Lesser General Public License (LGPL v3.0). We want you to change and improve the code — make a fork! Make sure to share your work with the community!

Cheap, Open Source Eye Tracking You Can Build Yourself

Controlling computers with one’s eyes is a staple of science fiction that makes a lot more sense than Minority Report-style hand-waving. Nobody wants to swing their arms all over the place just to open the future equivalent of Facebook — it’s horribly inefficient. Simply looking at the Futureface icon with your eyes, however, is very efficient. Moving your eyes is quick and expends almost no energy. Practical eye tracking is still in its infancy, and can often be quite expensive. But, John Evans’ open source eye tracker is affordable and seems to work well.

The goal of every eye tracking system is to simply determine what it is you’re looking at. It needs to be accurate enough to work with a traditional computing workspace, and quick enough to keep up with the fast movement of your eyes. Evans’ eye tracker achieves that, but the hardware is fairly complex. That said, the system is still very affordable, and all of the code is available so that you can build one yourself.

The primary components of the system are a pair of commercial $20 webcams, and a pair of infrared LED beacons. The beacons are stationary, and should be placed around whatever you’re looking at — probably your computer screen. One of the webcams points at your eyes, and uses the infrared reflections from the beacons to determine a “looking vector.” The second camera points towards what you’re looking at, and uses that looking vector to calculate where your gaze is actually falling. If you want to give it a try, Evans has developed a camera viewer program that will handle the math so you can get started.
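
Evans’ own code is linked from the article. Purely to illustrate the final step described above – turning an eye-derived measurement into an on-screen gaze point – the sketch below maps eye-feature coordinates to screen pixels with a homography estimated from a four-point calibration. This is a common approach, not Evans’ implementation; the calibration values and the gaze_to_screen helper are made up for the example, and it requires the numpy and opencv-python packages.

import numpy as np
import cv2

# During calibration the user fixates four known screen targets while we record
# the corresponding eye-feature coordinates (e.g., pupil position relative to the
# infrared beacon reflections). The values below are made-up placeholders.
eye_points = np.array([[0.31, 0.28], [0.72, 0.27],
                       [0.70, 0.66], [0.33, 0.64]], dtype=np.float32)
screen_points = np.array([[0, 0], [1919, 0],
                          [1919, 1079], [0, 1079]], dtype=np.float32)

H, _ = cv2.findHomography(eye_points, screen_points)

def gaze_to_screen(eye_xy):
    """Map a single eye-feature coordinate to a screen pixel."""
    pt = np.array([[eye_xy]], dtype=np.float32)   # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]

print(gaze_to_screen((0.5, 0.45)))   # lands roughly in the middle of the screen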

EyeLoop: An Open-Source System for High-Speed, Closed-Loop Eye-Tracking

Eye-trackers are widely used to study nervous system dynamics and neuropathology. Despite this broad utility, eye-tracking remains expensive, hardware-intensive, and proprietary, limiting its use to high-resource facilities. It also does not easily allow for real-time analysis and closed-loop design to link eye movements to neural activity. To address these issues, we developed an open-source eye-tracker – EyeLoop – that uses a highly efficient vectorized pupil detection method to provide uninterrupted tracking and fast online analysis with high accuracy on par with popular eye tracking modules, such as DeepLabCut. This Python-based software easily integrates custom functions using code modules, tracks a multitude of eyes, including in rodents, humans, and non-human primates, and operates at more than 1,000 frames per second on consumer-grade hardware. In this paper, we demonstrate EyeLoop’s utility in an open-loop experiment and in biomedical disease identification, two common applications of eye-tracking. With a remarkably low cost and minimum setup steps, EyeLoop makes high-speed eye-tracking widely accessible.

Introduction

At every moment, the brain uses its senses to produce increasingly complex features that describe its external world (Ehinger et al., 2015; Zeki, 2015). Our everyday behaviors, such as navigating in traffic, are directed in large part by our sensory input, that is, what we see, hear, feel, etc. (Lee, 1980). The eyes, in particular, engage in sensory facilitation; For example, the optomotor response of insects detects perturbations of visual flow to avoid collisions (Theobald et al., 2010) and elicits stabilizing head movements in mice (Kretschmer et al., 2017). Similarly, combined eye-head movements in free-roaming mice were recently shown to re-align the visual axis to the ground plane, suggesting that vision itself is subject to sensory modulation (Meyer et al., 2020). Tracking the state of the eyes is thus often integral to nervous systems research.

Eye-tracking is widely used in neuroscience, from studying brain dynamics to investigating neuropathology and disease models (Yonehara et al., 2016; Wang et al., 2018; Meyer et al., 2020). Despite this broad utility, commercial eye-tracking systems, such as ISCAN (de Jeu and De Zeeuw, 2012; Yaramothu et al., 2018), remain expensive, hardware-intensive, and proprietary, constraining use to high-resource facilities. Likewise, deep learning-based approaches, such as DeepLabCut (Nath et al., 2019), still require specialized processing units and, for the most part, are limited to offline tracking. More generally, current systems tend to be programmatically rigid, e.g., by being compiled into executable, proprietary software unavailable for modifications, or coded in a more complex syntax and system architecture with advanced software modules. To address these issues, we developed an open-source eye-tracker – EyeLoop – tailored to investigating visual dynamics at very high speeds. EyeLoop enables low-resource facilities access to eye-tracking and encourages community-based development of software code through a modular, tractable algorithm based on high-level Python 3.

Materials and Methods

Ethics Statement

All experiments on mice were performed according to standard ethical guidelines and were approved by the Danish National Animal Experiment Committee (2020-15-0201-00452). No experiments on non-human primates were conducted in this study. The video footage of human eyes was provided by a human volunteer. The video footage of marmoset eyes was provided by Jude Mitchell (University of Rochester).

Experimental Animals

Wild-type control mice (C57BL/6J) were obtained from Janvier Labs. Frmd7tm mice are homozygous female or hemizygous male Frmd7tm1b(KOMP)Wtsi mice, which were obtained as Frmd7tm1a(KOMP)Wtsi from the Knockout Mouse Project (KOMP) Repository. Exon 4 and the neo cassette flanked by loxP sequences were removed by crossing with female Cre-deleter Edil3Tg(Sox2–cre)1Amc/J mice (Jackson Laboratory stock 4,783), as confirmed by PCR of genomic DNA, and the line was maintained on a C57BL/6J background. Experiments were performed on 3 male and female wild-type control mice, and 3 male and female Frmd7tm mice. All mice were between 2 and 4 months old. Mice were group-housed and maintained in a 12 h/12 h light/dark cycle with ad libitum access to food and water.

Head-Plate Implantation

Surgeries and preparation of animals for experiments were performed as previously described (Rasmussen et al., 2020). Mice were anesthetized with an intraperitoneal injection of fentanyl (0.05 mg/kg body weight; Hameln), midazolam (5.0 mg/kg body weight; Hameln), and medetomidine (0.5 mg/kg body weight; Domitor, Orion) mixture dissolved in saline. The depth of anesthesia was monitored by the pinch withdrawal reflex throughout the surgery. Core body temperature was monitored using a rectal probe and temperature maintained at 37-38°C by a feedback-controlled heating pad (ATC2000, World Precision Instruments). Eyes were protected from dehydration during the surgery with eye ointment (Oculotect Augengel). The scalp overlaying the longitudinal fissure was removed, and a custom head-fixing head-plate was mounted on the skull with cyanoacrylate-based glue (Super Glue Precision, Loctite) and dental cement (Jet Denture Repair Powder) to allow for subsequent head fixation during video-oculographic tracking. Mice were returned to their home cage after anesthesia was reversed with an intraperitoneal injection of flumazenil (0.5 mg/kg body weight; Hameln) and atipamezole (2.5 mg/kg body weight; Antisedan, Orion Pharma) mixture dissolved in saline, and after recovering on a heating pad for 1 h.

Visual Stimulation

Visual stimulation was generated and presented via Python-based custom-made software (as EyeLoop Extractor modules). The visual stimulus was presented on a “V”-shaped dual-monitor setup (monitor size 47.65 × 26.87 cm, width x height) positioned 15 centimeters from the eye at an angle of 30° from the midline of the mouse. Each display thus subtended 115.61° in azimuth and 80.95° in elevation. This setup was adapted from a previous study (Rasmussen et al., 2021), which enabled us to cover most of the visual field of the mouse to evoke consistent visual responses. To evoke the optokinetic reflex in Frmd7 knockout and wild-type mice, we presented a square-wave drifting grating simulating binocular rotation. Drifting gratings were presented in eight trials for 30 s at a time with 4 s of the gray screen between presentations and were drifted in two different directions along the horizontal axis (0° and 180°; monocular and binocular; parallel and anti-parallel) with a spatial frequency of 0.05 cycles/° and a speed of 5°/s.

Rodent Video-Oculography

The mouse was placed on a platform with its head fixed to prevent head motion interference. Head fixation was achieved using a metallic plate implanted cranially. To minimize obstruction of the visual field-of-view, a 45° hot mirror was aligned above the camera and lateral to the rodent. The camera was positioned below the field-of-view due to space constraints in our experimental setup. Two PC monitors were positioned as described in subsection Visual Stimulation. Behind the right monitor, a near-infrared light source was angled at 45°. A CCD camera (Allied Vision Guppy Pro F-031 1/4″ CCD Monochrome Camera) was connected to the PC via a dedicated frame grabber PCIe expansion card (ADLINK FIW62). Using an EyeLoop Importer, vimba.py for Vimba-based cameras, the camera frames were fed to EyeLoop in real-time (fixed at ∼120 Hz). Finally, the standard EyeLoop data acquisition module continuously logged the generated tracking data.

Software Availability

The software described here – EyeLoop – is freely available online, see https://github.com/simonarvin/eyeloop. For extensive sample data and information, see https://github.com/simonarvin/eyeloop_playground.

Principles of EyeLoop

EyeLoop is based on the versatile programming language, Python 3 (Python Software Foundation), using no proprietary software modules. Contrary to other frameworks used for eye-tracking, such as LabView (Sakatani and Isa, 2004), MATLAB (Cornelissen et al., 2002), or ISCAN (de Jeu and De Zeeuw, 2012; Yaramothu et al., 2018), Python is open-source software and has recently seen a surge in popularity, generally credited to its outstanding software modularity and standard code library (Muller et al., 2015). Similarly, EyeLoop’s internal algorithm is modular: Experiments are built by combining modules, native or otherwise, with the Core engine (Figure 1).

Figure 1. Schematic overview of the EyeLoop algorithm and its applications. (A) Software overview. The engine exchanges data with the modules. The Importer module imports camera frames in a compatible format (A1). The frame is binarized and the corneal reflections are detected by a walk-out algorithm (A2). Using the corneal reflections, any pupillary overlap is removed, and the pupil is detected via walk-out (A3). Finally, the data is formatted in JSON and passed to all modules, such as for rendering (A4), or data acquisition and experiments (A5). (B) Occlusion filtering. EyeLoop tracks 32 points along the pupil contour by default. By computing the statistical mean of the data points and the standard deviation difference to the mean, EyeLoop filters the points to discard “bad” markers. Users can increase the number of markers in-code to produce better fits. (C) EyeLoop accepts a variety of animal eyes, including rodents, non-human primates, and humans, by employing distinct mathematical models for each type of eye.

Internally, EyeLoop consists of two domains: An engine and an array of external modules. The engine detects the pupil and corneal reflections, while the modules essentially import or extract data to and from the system, respectively. Extractor modules are thus commonly used for data acquisition or experimental schemes, such as closed loops. In turn, Importer modules import video sequences to the system, such as from a camera feed.

The graphical user interface is a module as well, enabling users to adapt the system to any application, such as optogenetic experiments or educational schemes. Generally, EyeLoop’s high modularity greatly improves its compatibility across software versions, hardware specifications, and camera types.

The engine processes each frame of the input video sequentially (Figure 1A1): Each video frame is received by the EyeLoop engine as it is externally triggered, for instance, by an automatic video feed (e.g., using a consumer-grade web-camera), or manually (e.g., using research-grade cameras by TTL or BNC). This enables users to synchronize EyeLoop to external behavioral or electrophysiological systems.

After the video frame is received, it is binarized, filtered, and smoothed by a Gaussian process (Figures 1A2,3). While EyeLoop provides an estimated initial set of parameters for video frame thresholding and filtering based on the pixel distribution, users are typically required to optimize the parameter set to obtain ideal processing conditions, e.g., high contrasts and smooth contours. This is done using key-commands, see Default graphical user interface in Supplementary Materials.

Next, the coordinates of the corneal light reflections and the pupil are selected manually by user input (Figures 1A2–4). Based on this initial position estimate, EyeLoop detects the contours of the pupil and corneal reflections using a novel variation on Sakatani and Isa’s (2004) iterative walk-out method. Our vectorized algorithm extracts the four cardinal axes and x diagonals from the image matrix (where x can be any integer). Specifically, the diagonals are given by the variable step-sizes m and n according to the definition D = d_{mi, nj}. The cardinal axes and diagonals are mapped onto Boolean matrices, which are used to mask the thresholded video feed. This provides targeted “array views” of the video frame matrix that can be tested against a binary condition to detect edges. Since the pupil consists of white pixels in the thresholded transform (value = 1), the first occurrence of a black pixel (0) in the array view is returned as an edge position. This is achieved via the Python module NumPy:

diagonal_edge = numpy.argwhere(video[diagonal_mask] == 0)

The detection of the pupil/corneal reflection contours is thus reduced to repeated matrix computations (extract view, test binary condition, return contour points, …), which can be distributed across multiple central processing unit (CPU) cores during runtime for advanced use-cases. In contrast to the conventional iterative method, our vectorized approach enables computational operations to run in well-optimized C code, which greatly benefits its efficiency. Likewise, the vectorized method ports easily to efficient, low-level machine code, e.g., via Numba.
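
To make the walk-out idea concrete, here is a simplified sketch of finding an edge point along one cardinal axis and one diagonal of a toy thresholded frame. It is not EyeLoop’s implementation – the edge_along_ray helper and the toy frame are invented for illustration – but it follows the same logic: build an “array view” along a ray and return the first background pixel.

import numpy as np

def edge_along_ray(binary, start, step, max_len=200):
    """Walk out from `start` in direction `step` (dy, dx) and return the first
    background (0) pixel, i.e. the pupil edge along this ray."""
    h, w = binary.shape
    y0, x0 = start
    coords = []
    for i in range(max_len):
        y, x = y0 + i * step[0], x0 + i * step[1]
        if not (0 <= y < h and 0 <= x < w):
            break
        coords.append((y, x))
    ray = np.array(coords)
    values = binary[ray[:, 0], ray[:, 1]]    # "array view" of the frame along the ray
    hits = np.argwhere(values == 0)          # first pixel outside the pupil
    if len(hits) == 0:
        return None
    y, x = ray[hits[0, 0]]
    return int(y), int(x)

# Toy binary frame: a white (1) pupil disc on a black (0) background.
yy, xx = np.mgrid[0:200, 0:200]
frame = ((yy - 100) ** 2 + (xx - 100) ** 2 < 40 ** 2).astype(np.uint8)

print(edge_along_ray(frame, (100, 100), (0, 1)))   # cardinal axis: expects (100, 140)
print(edge_along_ray(frame, (100, 100), (1, 1)))   # diagonal (step m = n = 1): ~(129, 129)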

The walk-out algorithm generates a matrix of points along the ellipsoid contour that is subsequently filtered based on the distance of each point from the mean (Figure 1B). Specifically, EyeLoop computes the mean of the contour matrix, the difference of each point from this mean, and the standard deviation of the set of distances from the mean. Points that are located more than 1 standard deviation from the mean are discarded. Since the mean approximates the center of the pupil, filtering performance can be improved by increasing the number of data points (by varying the diagonal step size). In general, more data points offer better tracking accuracy at a slight cost to tracking speed. The data generated by EyeLoop for this article was based on 32 contour points, which is also the default setting. This number strikes a balance between speed and accuracy for video frame sizes of up to 300 × 300. At larger video frame sizes, the number of contour points should be elevated as well to account for a larger pupillary circumference in video coordinates.
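
A minimal sketch of this filtering step might look as follows, under the assumption that “distance from the mean” refers to each point’s radial distance from the contour centroid; the filter_contour helper is illustrative, not EyeLoop’s actual function.

import numpy as np

def filter_contour(points):
    """points: (N, 2) array of candidate pupil-contour coordinates."""
    center = points.mean(axis=0)                         # approximate pupil center
    dists = np.linalg.norm(points - center, axis=1)      # radial distance per point
    keep = np.abs(dists - dists.mean()) <= dists.std()   # within 1 SD of the mean distance
    return points[keep]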

The ellipsoid outlined by the contour points is next parameterized and modeled as either a general ellipsoid shape (suitable for off-axis recordings, cats, rodents, …) (Halır and Flusser, 1998; White et al., 2010; Hammel and Sullivan-Molina, 2019) or a perfect circle (on-axis recordings in human, non-human primates, rodents, …) (Kanatani and Rangarajan, 2011). Notably, in cases where visual obstructions are significant, e.g., eyelids, whiskers, and shadows, EyeLoop may benefit from the more restrictive fitting of a perfect circle. On the other hand, when the eye is captured significantly off-axis, the video distortion of the pupil might make the general ellipsoid fitting more suitable (Świrski et al., 2012). Thus, the choice of the fitting algorithm extends beyond the animal species (Banks et al., 2015), and should include considerations about the video conditions, especially the camera angle, as well (Świrski et al., 2012). We have used circular tracking for all of this article’s data, since the camera angle was on-axis (orthogonal), and pupil shapes were round (mouse, human, primate). Human and non-human primate data are available in Supplementary Videos 1, 2.
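
For the perfect-circle option, a generic algebraic least-squares fit over the filtered contour points looks roughly like this. It is a standard Kåsa-style fit shown for orientation; EyeLoop’s own routines follow the fitting methods cited above.

import numpy as np

def fit_circle(points):
    """Least-squares circle fit. points: (N, 2) array of (x, y) coordinates.
    Returns (center_x, center_y, radius)."""
    x, y = points[:, 0], points[:, 1]
    # Solve x^2 + y^2 + D*x + E*y + F = 0 for D, E, F in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r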

Finally, as the next video frame is received by the Importer, the pupil and corneal reflection positions are re-estimated based on the previous frame’s ellipsoid fit center. In cases where the position of the pupil/corneal reflection deviates excessively between frames, e.g., due to blinking, EyeLoop falls back on a robust, yet more computationally expensive ellipse-detection algorithm based on the Hough transform: The most probable ellipsoid is selected based on position, size, and pixel distribution. If no suitable ellipsoids are detected, e.g., due to the eye being closed, the frame is marked as a blink. When a suitable ellipsoid is detected, the pupil center is reset, and EyeLoop’s contour detection is applied again.

Together, this vectorized, mixed-algorithm approach enables EyeLoop to consistently run at speeds of more than 1,000 frames per second, operated solely by the CPU. By contrast, cutting-edge deep learning methods on the CPU currently peak at speeds near 50 frames per second (Mathis and Warren, 2018).

Results

EyeLoop vs. DeepLabCut

DeepLabCut is a new deep neural network method for marker-less pose estimation (Nath et al., 2019), which is increasingly being applied to eye-tracking (Meyer et al., 2020). Due to its high accuracy and robustness, DeepLabCut presents an excellent eye-tracking reference for EyeLoop. The main disadvantage of DeepLabCut is its hardware intensiveness, requiring a dedicated processing unit for real-time operation (Mathis and Warren, 2018). Besides, the initial setup of DeepLabCut is time-consuming, generally spanning several hours of manual image labeling and subsequent neural network training. By contrast, EyeLoop operates at very high speeds on the general-purpose CPU with minimal initial setup needed (Figure 2).

Figure 2. EyeLoop compared to DeepLabCut eye-tracking. (A) Schematic comparison. DeepLabCut requires several steps of setup before tracking can be initiated. Eye-tracking using DeepLabCut on the CPU is limited to ∼50 Hz. In contrast, EyeLoop requires minimal setup and runs at speeds greater than 1,000 Hz on the CPU. (B) Data comparison. EyeLoop and DeepLabCut produce similar data despite a significant gap in computational load. Green and purple lines are EyeLoop and DeepLabCut data, respectively. Red and gray lines are EyeLoop’s and DeepLabCut’s framerate, respectively.

To generate the reference dataset, we trained a DeepLabCut neural network to detect 8 points along the pupil periphery. We then fitted an ellipsoid to DeepLabCut’s data points, which were confirmed to have ideal eye-tracking accuracy by visual inspection. The comparison shown in Figure 2B and Supplementary Video 3 reveals a high similarity between DeepLabCut and EyeLoop’s eye-tracking data, both in terms of absolute coordinates (0.015 ± 0.518 px) and ellipsoid fitting (0.357 ± 0.438 px²). Generally, EyeLoop slightly underestimated the ellipsoid area compared to DeepLabCut. The reason for this is shown in Figure 1B: Since EyeLoop optimizes its detection of the contour by filtering its data points around occlusions, it will inherently tend to underestimate the true pupil outline. This underestimation can be minimized by increasing the number of data points. Notably, in the case presented here, EyeLoop uses 32 data points to extract the pupil contour – a good balance between speed and accuracy – while DeepLabCut suffices with 8 points. This difference in quantity is explained by DeepLabCut’s general robustness at detecting image features, specifically the true outer contour of the pupil, while ignoring false contours, e.g., eyelid overlap or reflections. Since EyeLoop is based on a more specific algorithm, it benefits from a higher quantity of markers to reduce artifacts from noise and obstructions.

Despite this operational difference, EyeLoop operates at processing speeds greater than 1,000 Hz on a consumer-grade CPU (Intel i7 8700K, single-core performance), which far exceeds the speeds currently achievable with DeepLabCut on the CPU (∼50 Hz, Intel Xeon E5-2603 v4, multi-core performance) and with a high-end GPU (200–500 Hz, GTX 1080 Ti), even when significantly downsampled (Mathis and Warren, 2018). High processing speeds are critical for several types of experiments, including closed loop experiments that require very fast feedback, and experiments examining delicate eye movements, such as micro-saccades (> 600 Hz), post-saccadic oscillations (> 500 Hz), and fixation (Juhola et al., 1985; Nyström et al., 2013). Moreover, high-frequency sampling provides a high signal-to-noise ratio, making statistical tests less laborious (Andersson et al., 2010). These findings altogether demonstrate that EyeLoop is a valuable alternative to DeepLabCut for high-speed eye-tracking. Yet, when speed is of no concern, or when the video material is of poor quality (e.g., contains frequent whisking, blinking), DeepLabCut may be a better choice for more robust eye-tracking performance.

Open-Loop Experiment

To demonstrate the utility of EyeLoop in open-loop experiments, we designed an Extractor module that modulates the brightness of a monitor based on the phase of a sine wave function (Figure 3 and Supplementary Figure 1B). Using this design, we examined the pupillary reactivity to a light stimulus in awake mice.
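
The actual Extractor interface is documented in the EyeLoop repository; the sketch below only conveys the general shape of such a module. The class name and fetch method signature here are assumptions for illustration, not EyeLoop’s API: the module maps elapsed time onto a sine wave and converts the phase into a monitor brightness value in the range [0, 1].

import math
import time

class SineBrightness:
    """Hypothetical open-loop module: sine-modulated monitor brightness."""

    def __init__(self, cycles_per_min=6.0):
        self.freq_hz = cycles_per_min / 60.0
        self.t0 = time.monotonic()

    def fetch(self, pupil_area):
        """Called once per tracked frame; returns a brightness in [0, 1] and
        logs it alongside the current pupil area."""
        phase = 2 * math.pi * self.freq_hz * (time.monotonic() - self.t0)
        brightness = 0.5 * (1 + math.sin(phase))   # map sine output to [0, 1]
        print(f"brightness={brightness:.3f} pupil_area={pupil_area:.1f}")
        return brightness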

Figure 3. Open-loop experiments reveal pupillary reactivity dynamics in mice. (A) The setting used for eye-tracking in mice. A hot mirror is positioned beside the mouse and above the camera (A1–4). A monitor displaying the visual stimulus is positioned facing the mouse (A3), while a near-infrared light source is placed in the back (A2). (B) Open-loop experiment. A sine function is mapped onto the brightness of a monitor, producing oscillations in the pupil area. (C) Plots from three open-loop experiments with frequencies 1, 6, and 12 cycles/min. (D) Constriction speed (vc) and dilation speed (vd) for each frequency calculated using the first derivative of the pupil area plots. The centerline is median, box limits are 25th and 75th percentiles, and whiskers show the minimum and maximum values. *P < 0.05, **P < 0.01, n.s., not significant, Wilcoxon signed-rank test.

More concretely, the size of the pupil is modulated by a special class of intrinsically photosensitive cells in the retina that projects to the upper midbrain and modulates pupil size in the pupillary light reflex (Lucas et al., 2003; Markwell et al., 2010). Accordingly, as the light dims, the pupil dilates to let more light through the iris onto the retina. Crucially, pupillary reactivity to light is a common parameter by which clinicians assess patient neurological status. For example, it has been demonstrated that abnormal pupillary light reactivity correlates with elevated intracranial pressures, possibly reflecting undiagnosed disease (Chen et al., 2011). Providing an accessible method to assess pupillary reactivity thus presents an attractive clinical use-case of EyeLoop.

To examine pupillary reactivity, we modulated the brightness of a PC monitor via three sine-wave frequencies ranging from 1 to 12 cycles/min; with increasing frequency, the monitor brightness cycled more rapidly through dim and bright settings. Using this setup, our findings confirm, first, that pupil size entrains to monitor brightness by inverse proportionality, a predictable consequence of the pupillary light reflex (Figure 3C). Second, using the pupil area’s first derivative, we found that the pupillary constriction speed dominates the speed of dilation in mice, which mirrors findings in humans (Figure 3D; Ellis, 1981). Taken together, these findings show that EyeLoop is well-suited to examine pupillary reactivity in living subjects.

Optokinetic Reflex in Congenital Nystagmus Model vs. Wild-Type Mice

Often, neurological disorders, such as an undiagnosed brain hemorrhage or Horner’s syndrome, generate distinct abnormalities of the eyes. Similarly, patients suffering from congenital nystagmus exhibit flickering eye movements due to a failing optokinetic reflex. Detecting such neuropathological manifestations is crucial for early clinical diagnosis and biomedical research protocols. To show how EyeLoop may be applied to these ends, we confirmed previous findings showing that Frmd7 hypomorphic mice lack the horizontal optokinetic reflex, similar to Frmd7-mutated congenital nystagmus patients (Yonehara et al., 2016). More concretely, we compared the optokinetic reflex of wild-type and Frmd7 knockout mice, in which exon 4 of Frmd7 was deleted from the genome, thus aiming to extend phenotypic reports on the hypomorphic genotype (Yonehara et al., 2016). To evoke the optokinetic reflex, we simulated a rotational motion using a bilateral drifting grating stimulus (Figure 4A). As expected, for wild-type mice, the optokinetic reflex was faithfully evoked (Figure 4B), whereas the reflex was absent in Frmd7 knockout mice (Figure 4C). EyeLoop thus successfully verified the Frmd7 hypomorphic phenotype in the complete knockout strain.

Figure 4. The horizontal optokinetic reflex is absent in Frmd7 knockout mice. (A) Rotational motion simulation using gratings drifting in parallel along the horizontal axis. (B) Eye movements evoked by the optokinetic reflex in wild-type and Frmd7tm mice in response to drifting grating stimulation. The azimuth represents the horizontal angular coordinate of the eye. (C) The optokinetic reflex was quantified as eye-tracking movements per minute (ETMs), computed by thresholding the first derivative of eye movements, as described by Cahill and Nathans (Cahill and Nathans, 2008). Error bars show standard deviation. *P < 0.05, Wilcoxon signed-rank test.

Discussion

Conventional systems for eye tracking are typically tailored to large eyes, such as in human patients or non-human primates. For this reason, these systems often perform less accurately in rodents, where whiskers and eyelids tend to occlude the pupil. EyeLoop filters out occlusions by generating a highly detailed pupil marking. Thus, EyeLoop presents an attractive system for rodent biomedical research to investigate disease models, such as congenital nystagmus (Figure 4). Similarly, we recently applied EyeLoop in our lab to monitor the optokinetic reflex and investigate optic flow computations in visual cortices (Rasmussen et al., 2021).

EyeLoop fills an important gap as a tool to investigate the role of the eyes in brain processes. Sensory integration is complex, and often the eyes play an instrumental role in its orchestration. Eye-tracking during sensory exploration carries enormous information on how the senses are used by the brain: For example, during fast whole-body rotation, the eyes act to stabilize the gaze via the vestibulo-ocular reflex by integrating both vestibular and visual signals (Fetter, 2007). Yet, despite the known complexities of sensory computations, visual experiments are usually aimed at strictly monitoring the eyes (Meyer et al., 2020) or at applying one-sided stimuli in open loops (de Jeu and De Zeeuw, 2012). EyeLoop integrates the eyes as experimental items, providing pupil parameters for online processing and analysis.

EyeLoop’s very high speed enables rapid experimental loops that are crucial for investigating visual and neural dynamics. Indeed, fine eye movements, such as post-saccadic oscillations and micro-saccades, are only discernible at high sampling frequencies (preferably greater than 1,000 Hz) (Juhola et al., 1985; Nyström et al., 2013), which is currently offered by no other open-source software than EyeLoop. Similarly, investigating the dynamics of neural learning and plasticity requires a very precise timing of learning cues, e.g., based on pupil size and arousal state (McGinley et al., 2015; Costa and Rudebeck, 2016; Wang et al., 2018; de Gee et al., 2020). EyeLoop’s seamless integration of experimental protocols, via its Extractor class, enables researchers to design loops that iterate at high speeds (> 1,000 Hz) to reveal causal relations of neural dynamics. Future experiments could thus apply EyeLoop to silence or stimulate specific neuronal populations via optogenetics to investigate the causality between neuronal activity and the endogenous parameters by which the nervous system operates (Grosenick et al., 2015).

Limitations of the Study

The accuracy of EyeLoop hinges on the quality of the video frames, so illumination and contrast should be optimized to get the best results. Additionally, EyeLoop is vulnerable to frame-to-frame inconsistencies, such as after prolonged blinking. To counter this vulnerability, EyeLoop falls back on the Hough transform in cases where its main algorithm fails. This enables EyeLoop to run on inexpensive hardware at very high speeds, yet at a cost in robustness compared to well-trained, deep learning-based approaches (Nath et al., 2019). In the same vein, EyeLoop’s edge detection is vulnerable to visual obstructions that cannot be sufficiently filtered by thresholding and Gaussian mapping, such as dense whiskers and significant eyelid overlap. Deep-learning methods, however, are often limited to offline processing due to hardware-intensive operations. Despite these limitations, EyeLoop provides an attractive balance between speed, accuracy, and robustness, which enables high-speed closed-loop experiments through high-level programming.

Data Availability Statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://github.com/simonarvin/eyeloop.

Ethics Statement

The animal study was reviewed and approved by the Danish National Animal Experiment Committee.

Author Contributions

SA and KY conceived, designed the project, interpreted the data, and wrote the manuscript. SA developed the software and analyzed the data. SA and RNR performed the experiments. All authors contributed to the article and approved the submitted version.

Funding

RNR was supported by the Lundbeck Foundation Ph.D. Scholarship (R230-2016-2326). KY was supported by the Lundbeck Foundation (DANDRITE-R248-2016-2518; R252-2017-1060), Novo Nordisk Foundation (NNF15OC0017252), Carlsberg Foundation (CF17-0085), and European Research Council Starting (638730) grants.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

We thank Jude Mitchell for video footage of marmoset pupil, Zoltan Raics for developing our visual stimulation system, and Bjarke Thomsen and Misugi Yonehara for technical assistance.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fncel.2021.779628/full#supplementary-material

References

Andersson, R., Nyström, M., and Holmqvist, K. (2010). Sampling frequency and eye-tracking measures: how speed affects durations, latencies, and more. J. Eye Mov. Res. 3, 1–12. doi: 10.16910/jemr.3.3.6

Cahill, H., and Nathans, J. (2008). The optokinetic reflex as a tool for quantitative analyses of nervous system function in mice: application to genetic and drug-induced variation. PLoS One 3:e2055. doi: 10.1371/journal.pone.0002055

Chen, J. W., Gombart, Z. J., Rogers, S., Gardiner, S. K., Cecil, S., and Bullock, R. M. (2011). Pupillary reactivity as an early indicator of increased intracranial pressure: the introduction of the neurological pupil index. Surg. Neurol. Int. 2:82. doi: 10.4103/2152-7806.82248

Costa, V. D., and Rudebeck, P. H. (2016). More than meets the eye: the relationship between pupil size and locus coeruleus activity. Neuron 89, 8–10. doi: 10.1016/j.neuron.2015.12.031

de Gee, J. W., Tsetsos, K., Schwabe, L., Urai, A. E., McCormick, D., McGinley, M. J., et al. (2020). Pupil-linked phasic arousal predicts a reduction of choice bias across species and decision domains. eLife 9:e54014. doi: 10.7554/eLife.54014

de Jeu, M., and De Zeeuw, C. I. (2012). Video-oculography in mice. J. Vis. Exp. e3971.

Halır, R., and Flusser, J. (1998). Numerically Stable Direct Least Squares Fitting of Ellipses. Prague: Citeseer.

Juhola, M., Jäntti, V., and Pyykkö, I. (1985). Effect of sampling frequencies on computation of the maximum velocity of saccadic eye movements. Biol. Cybern. 53, 67–72. doi: 10.1007/bf00337023

Kanatani, K., and Rangarajan, P. (2011). Hyper least squares fitting of circles and ellipses. Comput. Stat. Data Anal. 55, 2197–2208. doi: 10.1016/j.csda.2010.12.012

Markwell, E. L., Feigl, B., and Zele, A. J. (2010). Intrinsically photosensitive melanopsin retinal ganglion cell contributions to the pupillary light reflex and circadian rhythm. Clin. Exp. Optom. 93, 137–149.

Mathis, A., and Warren, R. (2018). On the inference speed and video-compression robustness of DeepLabCut. bioRxiv [Preprint]. 457242. doi: 10.1101/457242

Nyström, M., Hooge, I., and Holmqvist, K. (2013). Post-saccadic oscillations in eye movement data recorded with pupil-based eye trackers reflect motion of the pupil inside the iris. Vision Res. 92, 59–66. doi: 10.1016/j.visres.2013.09.009

Rasmussen, R., Matsumoto, A., Dahlstrup Sietam, M., and Yonehara, K. (2020). A segregated cortical stream for retinal direction selectivity. Nat. Commun. 11:831.

Świrski, L., Bulling, A., and Dodgson, N. (2012). “Robust real-time pupil tracking in highly off-axis images,” in Proceedings of the Symposium on Eye Tracking Research and Applications ETRA ’12 (New York, NY: Association for Computing Machinery), 173–176.

Yaramothu, C., Santos, E. M., and Alvarez, T. L. (2018). Effects of visual distractors on vergence eye movements. J. Vis. 18:2.

Open source eye tracking platform.

Adapt & Extend

Pupil Core is used for a diverse range of research purposes. The headset is modular, durable, and lightweight.

Use Pupil Core’s API to connect to other devices. Easily add custom features by writing a plugin in Python. Load plugins at runtime in the app.

Adapt our hardware and software to suit your needs. Build novel prototypes.

See API Docs

Webcam Eye Tracking Software

Track eye movements and create video recordings with this handy application that uses your webcam as an input source. Online live demo. Conducting a variety of scientific experiments can be rewarding because they can lead to groundbreaking discoveries, but running many of them is simply not possible without the right tools. Thanks to continued interest, however, …

Information on the keyword eye tracking open source

The following are Bing search results for the eye tracking open source topic. You can read more if you need to.

This article was compiled from a variety of sources on the internet. We hope you found it useful. If you found this article helpful, please share it. Thank you very much!

Keywords people frequently search for in connection with the topic What Is Eye Tracking and How Does It Work | CoolTool's Demo Video

  • Eye Tracking
  • Market research
  • Survey
  • Questionnaire
  • research
  • innovation
  • new technology
  • data collection
  • cooltool
  • eye tracking
  • eyetracker
  • neuromarketing
  • heatmap
  • online surveys
  • tobii
  • low cost eye tracker
  • neuromarketing research
  • consumer behavior
  • effective
  • visible
  • advertising
  • marketing
  • ad
  • advertisement
  • market research
  • advertising research
  • eye tracker



Watch more videos on the eye tracking open source topic on YouTube

Thank you for reading this article on the topic What Is Eye Tracking and How Does It Work | CoolTool's Demo Video | eye tracking open source. If you found this article useful, please share it. Thank you very much.

