The application of artificial intelligence (AI) technologies to assist human electroencephalogram (EEG) analysis has become an active scientific field. This study aims to present a comprehensive review of the research field of AI-enhanced human EEG analysis. Using bibliometrics and topic modeling, research articles concerning AI-enhanced human EEG analysis collected from the Web of Science database during the period 2009–2018 were analyzed. After examining 2053 research articles published around the world, it was found that the annual number of articles had grown significantly from 78 to 468, with the USA and China being the most influential and prolific countries. The keyword analysis showed that "electroencephalogram," "brain–computer interface," "classification," "support vector machine," "electroencephalography," and "signal" were the most frequently used terms. The topic modeling and evolution analyses highlighted several important topics, including epileptic seizure detection, brain–machine interfaces, EEG classification, mental disorders, emotion, alcoholism, and anesthesia.

In this study, the parallel application of the back-propagation neural network algorithm was carried out in two steps. In the first step the matrix operations are parallelized, and in the second step the arithmetic operations are parallelized. Multiplying the input values by the weight values between the input and hidden layers is an example of a matrix operation; the calculation of the sigmoid function and the weight updates are examples of arithmetic operations. For the parallelization of the matrix operations (see Fig. 6), the CUBLAS library was chosen: since its routines are already structured for optimal parallelism, identifying the grids and blocks does not pose a problem. For the arithmetic operations, parallelism was realized by defining a kernel, which is the name given to a function that runs on the GPU.
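The two-step split described above can be sketched as a plain C++ CPU reference; the function names `weighted_sums` and `sigmoid` are illustrative, not from the paper, and the comments note where each part would move to the GPU.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Step 1 (matrix operation): multiply the input vector by the
// input-to-hidden weight matrix. On the GPU this is the part that
// can be handed to CUBLAS as a single matrix-vector product.
std::vector<float> weighted_sums(const std::vector<std::vector<float>>& w,
                                 const std::vector<float>& x) {
    std::vector<float> z(w.size(), 0.0f);
    for (std::size_t i = 0; i < w.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            z[i] += w[i][j] * x[j];
    return z;
}

// Step 2 (arithmetic operation): apply the sigmoid element-wise.
// Each element is independent, so in a custom kernel one GPU thread
// can handle one element.
std::vector<float> sigmoid(const std::vector<float>& z) {
    std::vector<float> a(z.size());
    for (std::size_t i = 0; i < z.size(); ++i)
        a[i] = 1.0f / (1.0f + std::exp(-z[i]));
    return a;
}
```

The point of the split is that the two halves parallelize differently: the matrix product reuses a tuned library routine, while the element-wise sigmoid and weight updates need only a trivial one-thread-per-element kernel.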
The developed software was created in Visual C++ 2010. Table 4 summarizes the hardware used in this study. The GF108 architecture has 585 million transistors and 96 CUDA cores; the GF106 architecture has 1.17 billion transistors and 192 CUDA cores. Figure 5 shows the architecture of the Nvidia Quadro 2000, with an enlarged panel showing the streaming processors. Its 192 CUDA cores are clustered within 4 streaming processors, each containing 48 cores. The graphics card provides up to 1 GB of memory. With 48 KB of shared memory, information can be shared among the threads of a block. Each streaming processor has eight special function units (SFUs), which carry out special mathematical operations such as sine, cosine, square root, and interpolation; each SFU executes one instruction per clock cycle for each thread. In each core there is an integer unit (INT) for integer operations and a floating-point unit (FP) for decimal operations.

The mathematical calculations of neural networks consist mostly of matrix multiplication operations. CUDA can process matrix multiplication efficiently using the shared memory available per block. In applications, shared memory increases efficiency significantly: in general GPU processing, 400–600 cycles are needed to access global memory, whereas only about 4 cycles are needed to access shared memory. Therefore, CUDA's shared memory is important for the efficient execution of processing. Apart from the matrix operations, the calculation of the activation function in each hidden neuron can also be performed in parallel. Threads can thus be created equal in number to the elements of the matrix, and the processes can then be calculated independently in each thread. The matrix operations of a portion of the forward neural-network calculation steps performed on CUDA are presented in the figure; as it shows, the neural network has a structure well suited to parallel processing.
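The one-thread-per-matrix-element idea can be made concrete with a minimal C++ sketch (names are illustrative): each output element of a matrix product depends only on one row of the first matrix and one column of the second, so the per-element function below is exactly the independent unit of work a single CUDA thread would perform.

```cpp
#include <cassert>
#include <vector>

using Matrix = std::vector<std::vector<float>>;

// Computes one element C[row][col] of C = A * B. This is the
// independent unit of work one CUDA thread would perform.
float dot_row_col(const Matrix& a, const Matrix& b,
                  std::size_t row, std::size_t col) {
    float sum = 0.0f;
    for (std::size_t k = 0; k < b.size(); ++k)
        sum += a[row][k] * b[k][col];
    return sum;
}

Matrix multiply(const Matrix& a, const Matrix& b) {
    Matrix c(a.size(), std::vector<float>(b[0].size(), 0.0f));
    for (std::size_t i = 0; i < c.size(); ++i)        // on the GPU: thread index y
        for (std::size_t j = 0; j < c[0].size(); ++j) // on the GPU: thread index x
            c[i][j] = dot_row_col(a, b, i, j);
    return c;
}
```

In the CUDA version the two loops disappear into the thread grid, and each block would first stage tiles of A and B in the 48 KB shared memory so that the repeated reads hit the ~4-cycle shared memory rather than the 400–600-cycle global memory.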
Experiments were carried out on Nvidia GeForce GT 525M (GF108 architecture) and Nvidia Quadro 2000 (GF106 architecture) graphics cards, and on an Intel Core i7-2670QM 2.20 GHz CPU with 8 GB of memory. For a fair comparison, the software was coded in the same programming language. Also, a single-precision representation was chosen in the coding.
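A comparison of this kind typically times the same single-precision workload on each platform with a wall-clock timer. The sketch below shows one way such host-side timing could look; the timing helper and the placeholder workload are assumptions for illustration, not the paper's actual benchmark code.

```cpp
#include <cassert>
#include <chrono>
#include <vector>

// Runs a callable and returns its wall-clock duration in milliseconds.
// In a CPU/GPU comparison the same wrapper would time both versions.
template <typename Fn>
double time_ms(Fn&& fn) {
    auto start = std::chrono::steady_clock::now();
    fn();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}

// Placeholder single-precision workload: all arithmetic stays in float,
// matching the single-precision representation chosen in the study.
float saxpy_sum(std::size_t n) {
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    float acc = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        acc += 2.0f * x[i] + y[i];
    return acc;
}
```

Keeping both implementations in the same language and precision, as the study does, ensures that measured differences come from the hardware and the parallelization rather than from compiler or numeric-format effects.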