Clinical Brain Training: A How-To Manual


Introduction

In a final preliminary analysis, we examined whether diagnosis was a significant predictor of the changes in each construct. Recall that these participants reported some form of learning difficulty that may stem from a diagnosis of ADHD, dyslexia, or a number of other disabilities. Because such diagnoses could confound the results of the primary analysis, we performed a series of linear regression analyses with diagnosis as the predictor variable and gain scores as the dependent variables to determine the extent to which the diagnoses might need to be controlled.
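To make this screening step concrete, the following Python sketch fits one regression per gain score. It is an illustration of the analysis described above, not the authors' code; the file and column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("participants.csv")  # hypothetical file, one row per participant

# Regress each gain score on diagnosis (categorical) and inspect the overall F test.
for outcome in ["gain_working_memory", "gain_long_term_memory", "gain_processing_speed"]:
    model = smf.ols(f"{outcome} ~ C(diagnosis)", data=df).fit()
    print(outcome, round(model.f_pvalue, 4))  # does diagnosis predict this gain score?

If the F test for diagnosis is non-significant for a given gain score, diagnosis need not be retained as a control variable for that outcome.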

Data screening indicated no missing data, and all variables were within tolerable ranges for skewness. One member of the mixed delivery group was excluded from the analysis because the complexity of the training tasks prevented the participant from completing them; this participant was instead provided with a different cognitive training intervention designed for younger children.

The former showed a large effect size, and the latter approached a large effect size. The smallest effect sizes were observed for working memory and attentional capacity, followed by IQ score and processing speed, all of which showed small effect sizes. Overall, the results were similar between the groups, suggesting that the mixed delivery model may be a feasible method of scaling the intervention to reach more children while achieving comparable results.


However, there was a significant difference in outcomes between the groups on long-term memory, which needs to be addressed. The one-on-one delivery group's gains from pre-test to post-test were more than twice those of the mixed delivery group. An evaluation of individual components of the delivery models should be considered to address this discrepancy in long-term memory outcomes. That is, what characteristics of the training models may have contributed to the difference?

A check of the fidelity monitoring records indicated the one-on-one delivery group did not spend more training time working on long-term memory tasks with the cognitive trainer than the mixed delivery group. However, the computerized training procedures do not include multitasking components that target memory skills. In the absence of the trainer during the digital training time, there is no method for adding distractions and additional tasks to complete orally on top of the digital exercises.

Thus, the one-on-one delivery group received more exposure to task loading by the cognitive trainer than the mixed delivery group. Although this would seemingly address working memory development, it may have contributed to the difference in long-term memory skills at post-test given the nature of the assessment task. That is, the task measures automaticity of retrieval (Schrank), which the mixed delivery participants may not have developed as strongly as the one-on-one delivery group.

Pending a replication study with a much larger sample, programming random distractions or adding a multitasking component to the digital Brainskills program may be worth considering as a way to narrow this gap in long-term memory outcomes, if the gap indeed exists. There is, however, a small chance that the difference in outcomes is the result of the significant differences in long-term memory scores between the groups at pre-test. The mixed delivery group began the study with pre-test scores almost five points above the population mean, while the one-on-one delivery group began the study with pre-test scores just below the population mean.

Therefore, the roughly six-point pre-test difference between the groups must be kept in mind. However, the difference between groups on the long-term memory measure remained significant even after controlling for these pre-test differences in the analysis. To illustrate, consider the difference in how processing speed was assessed versus how it was trained. The assessment task for processing speed required participants to locate and circle pairs of identical images. The cognitive trainer might load the training task by asking the participant to recite the alphabet or count backwards from a given number on beat to the metronome at the same time.

In the intervention, processing speed was also targeted in every training task by using the metronome or a stopwatch, which forced participants to make decisions more rapidly regardless of the construct the task was designed to improve. There are a few limitations and implications for future research in the current study. First, the sample size is small. Although the sample is consistent with sample sizes in similar cognitive training studies, there is a risk that the results are due to the limited statistical power of the design. The field would benefit from a replication study with a larger sample size.

Next, long-term outcomes in the current sample have not been assessed. An important gap in the literature is the long-term retention of cognitive training gains. Although beyond the scope of the current study, it will be important to conduct follow-up assessments of this study sample to determine whether improvements in cognitive and functional skills persist as well as they have in prior follow-up assessments of the intervention used in the current study (Wainer and Moore). A potential limitation may also be that working memory and attentional capacity were measured using the same assessment task.

However, there is research to support the use of digit span tasks in measuring scope of attention and attentional control in children (Cowan et al.). Indeed, the constructs are related and challenging to disentangle. It is important to note, though, that the inclusion of the associative memory task further delineates the memory constructs.

That means that if we were to simply refer to the digit span task as the attentional control measure in the current study, we would still be left with two additional measures of memory: associative memory and delayed recall. Thus, the potential confounding of constructs measured by the digit span task is mitigated. In future research, the addition of a continuous performance test would enable the assessment of divided and selective attention as well. Future research should also include formal measures of far transfer to behavioral and functional outcomes.

In the current study, the researchers collected session notes from the cognitive trainers, which documented self-reported improvements noted by the participants. This was a limitation of the current study and is an important area to address in future research. Another limitation of the study is that it was not possible to compare the delivery methods of all the training tasks because only ten of the training tasks were suitable for adaptation to digital format.

This is a limitation of digitization for the field overall, however, because many training procedures lend themselves only to delivery in person.


Finally, we compared treatment delivery methods that overlapped rather than two methods that were completely distinct. The design creates some difficulty in teasing apart the different mechanisms at work in each method. However, there is substantial ecological validity to the design. The two delivery methods tested in the current study are the methods used in clinics around the world. The question of placebo or expectancy effects often arises when considering the outcomes of intervention studies.

In the current study, these effects are controlled by the design because both groups completed an intervention for the same number of hours in the presence of an adult, and neither group knew there was a difference in delivery methods. However, it is interesting to note that prior research using two control groups to test for placebo effects in cognitive training studies has failed to find any, including two meta-analyses that revealed no statistically significant differences between active and passive controls (Au et al.).

Thus, we suggest that expectancy or placebo effects were unlikely in the current study as well. The authors examined the difference between the one-on-one delivery method and a control group in the first phase of this study (Carpenter et al.). The results of the current study provide support for the use of mixed delivery in scaling the ThinkRx program to reach more children, including those with neurodevelopmental and learning disorders. The results were consistent with prior studies on ThinkRx with children (Carpenter et al.).


This convergence of evidence is important for determining the benefit of adopting an intervention for use in clinical practice (Carey and Stiles). The ability to scale a one-on-one intervention has important implications for its use in clinical practice, where time is a limiting factor for scheduling clients. The mixed delivery model enables each clinician to see twice as many clients and provide the intervention to more children who need it.

The first and third authors are employed by the Gibson Institute of Cognitive Research, the nonprofit research arm of the interventions described in this paper. The other authors report no pecuniary or professional interest.

Method

Participants

A sample of 39 participants between the ages of 8 and 14 was recruited through an email invitation to a large database of families who had contacted the Colorado Springs LearningRx training center for information about the program.

Table 1 illustrates the pre-intervention demographics including diagnoses. One participant in the one-on-one delivery group was on stimulant medication for ADHD, and the medication status remained stable throughout the study. Both groups were tested by the same clinicians. Pre- and post-testing was supervised by a doctoral-level psychologist who was aware of the group assignments. In addition to conducting pre- and post-cognitive testing, the research team met with parents and cognitive trainers before the intervention, at the midpoint, and at the completion of the intervention to document qualitative changes observed in the participants.

Table 1 presents the pre-intervention demographics, including mean age, for the one-on-one and mixed delivery groups. Subtest 10 was also administered to obtain a measure of long-term memory. Although no measure of selective or divided attention is available on the Woodcock Johnson test battery, the Numbers Reversed subtest served as a measure of attentional capacity as well as working memory (Mather and Woodcock). A description of the tests is listed in Table 2.

Table 2. Variables, corresponding WJ III tests, and descriptions.

Associative memory (Visual-Auditory Learning): Participant learns a rebus and then recalls and recites the association between the pictures and the words.
Visual processing (Spatial Relations): Participant visually matches individual puzzle pieces to a completed shape.
Auditory processing (Sound Blending): Participant hears a series of phonemes and then blends them to form a word.
Logic and reasoning (Concept Formation): Participant applies inductive rules to a set of shapes and indicates the rule that differentiates them.
Working memory and attentional capacity (Numbers Reversed): Participant hears a list of numbers and repeats them in reverse order.
Long-term memory (Visual-Auditory Learning, Delayed): Participant recalls verbal-visual associations learned earlier by reading rebus passages.

The digital training program, called Brainskills, is a computerized version of ten training exercises from the traditional ThinkRx program that progresses by levels of intensity and difficulty like the one-on-one version.

That is, the ten computer-based tasks were designed to mimic ten of the trainer-delivered tasks and were the only tasks that could be digitally adapted with a high level of fidelity. Figure 1 shows an example of both versions of the same exercise: Reasoning Brain Cards. In this inductive logic exercise, participants use a set of rules to identify a three-card group from a set of 12 cards, each of which has four features: shape, color, size, and orientation. One task is to identify three cards that all share the same variable. Another task is to identify a card that is not shown but that would complete a set.
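To make the exercise structure concrete, here is a toy Python sketch of the two card-search tasks. The feature values and the set-completion rule are assumptions for illustration (the actual ThinkRx rule set is not specified here), not the program's implementation.

from itertools import combinations

FEATURES = ["shape", "color", "size", "orientation"]
VALUES = {  # hypothetical feature values
    "shape": ["circle", "square", "triangle"],
    "color": ["red", "green", "blue"],
    "size": ["small", "medium", "large"],
    "orientation": ["up", "left", "right"],
}

def shares_a_variable(cards):
    # True if the three cards carry the same value on at least one feature.
    return any(len({card[f] for card in cards}) == 1 for f in FEATURES)

def three_card_groups(cards):
    # Task 1: find all three-card groups that share a variable.
    return [group for group in combinations(cards, 3) if shares_a_variable(group)]

def completing_card(a, b):
    # Task 2 (Set-style rule, assumed): the missing third card matches the pair
    # where they agree and takes the remaining value where they differ.
    third = {}
    for f in FEATURES:
        if a[f] == b[f]:
            third[f] = a[f]
        else:
            third[f] = next(v for v in VALUES[f] if v not in (a[f], b[f]))
    return third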

Preliminary Results

On the long-term memory measure, the one-on-one delivery group showed greater growth than the mixed delivery group. Table 3 presents the results of significance testing between the groups.

References

Alloway, T. Working memory in children with developmental disorders. Journal of Learning Disabilities, 42(4).

American Psychiatric Association. Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.

Au, J. Improving fluid intelligence with training on working memory: a meta-analysis. Psychonomic Bulletin & Review, 22(2).

Beck, S.

The radial distance method was used to assess each segmentation technique's ability to detect hippocampal atrophy in 3D. The hippocampal volumes obtained with the fast semi-automated segmentation method were highly comparable to those obtained with the labor-intensive manual segmentation method. The AdaBoost automated hippocampal segmentation technique is highly reliable, allowing the efficient analysis of large data sets. Alzheimer's disease (AD), the most common type of dementia, is a slowly progressing disease affecting a rising number of individuals every year. The structural integrity of AD patients' brains is often compromised decades before they become symptomatic.

Mild cognitive impairment (MCI) is the first stage at which cognitive decline can be objectively captured by neuropsychological testing using population-derived normative data. Magnetic resonance (MR) imaging is an important tool used by medical professionals in the diagnosis of patients with neurodegenerative disorders. It is also used abundantly in clinical research to study disease progression or to examine correlations between atrophy and other variables such as genetic profiles or performance on neuropsychological tests.

Hippocampal atrophy is a widely accepted imaging biomarker for AD (Apostolova et al.). Disease history studies and clinical trials nowadays enroll hundreds of patients and rely on serial MR imaging to capture brain atrophy rates. Manual hippocampal segmentation is a slow and highly labor-intensive approach.

Consequently, it is critical to develop automated brain imaging techniques that can accurately extract hippocampal structures from large datasets while using minimal human operator input. Several studies have proposed automated hippocampal segmentation techniques. One such study involved the patch-based method, which uses expert traces as priors to segment anatomical structures.

In this method, each voxel is labeled individually and its surrounding patch of voxels is compared to patches in the training set in order to match anatomical regions of brain structures (Coupe et al.). Other studies use deformable shape models (Yang and Duncan; Chupin et al.). More recently, segmentation techniques are being developed that incorporate different aspects of these models (Morra et al.). Although multiple techniques for automated hippocampal segmentation have been developed and embraced by many for analyzing large data sets, there are ongoing concerns in the research community regarding their accuracy, given that brain structures, especially the subcortical regions, display significant anatomic complexity and variation.
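The patch-based idea can be sketched as follows. This is a toy nearest-neighbor version for illustration, not the published implementation by Coupe et al.; the array names are hypothetical.

import numpy as np

def extract_patch(volume, x, y, z, r=2):
    # The (2r+1)^3 neighborhood around one voxel, flattened to a vector.
    return volume[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1].ravel()

def label_voxel(patch, train_patches, train_labels, k=5):
    # Compare the voxel's patch to every expert-traced training patch
    # (sum of squared differences) and let the k closest patches vote.
    distances = np.sum((train_patches - patch) ** 2, axis=1)
    nearest = np.argsort(distances)[:k]
    return int(round(train_labels[nearest].mean()))  # majority of 0/1 labels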

Several groups are therefore working on approaches to overcome these problems. One group has suggested using template sets that are specific to the age of the individuals in the cohort being studied (Shen et al.). Others have suggested that a common online dataset of segmented hippocampi or other anatomical structures should be developed and used as a validation tool (Jafari-Khouzani et al.). Some researchers state that both random and systematic errors in segmentation can be corrected (Wang et al.).

Random errors, such as those caused by structural abnormalities, can be corrected by combining segmentation data from multiple attempts. Systematic errors caused by misinterpretation of the manually segmented images that serve as priors can be addressed by creating an algorithm that detects them using model errors (Wang et al.). The challenges brought about by the different factors that may cause incorrect labeling of subcortical structures have led researchers to come up with multiple features that are sensitive to anatomical variation (Morra et al.).

To label hippocampal tissues correctly, our automated technique takes into account approximately 13,000 features. Among these features are image intensity, gradients, curvatures, tissue classification maps of gray and white matter as well as CSF, means and standard deviations, Haar filters, and combinations of x, y, and z positions (Morra et al.). The algorithm's performance has been validated in prior reports and, when labeling new data previously unseen by the algorithm, it has been found to agree with human raters as well as human raters agree with each other (Morra et al.).

It has also been found to compare favorably to the automated hippocampal segmentation method from the FreeSurfer package (Morra et al.). In this study, we compare manual and automated hippocampal segmentation methods in order to establish the reliability and reproducibility of our machine-learning-based classification technique. We trained our automated segmentation tool, AdaBoost, with training sets traced by two different experts and compared the volumetric and 3D shape outputs to each other and to the gold standard: manual hippocampal segmentation of the same dataset.
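As a rough picture of the classifier training step, the sketch below uses scikit-learn's AdaBoostClassifier as a stand-in for the authors' implementation; the feature and label files are hypothetical.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

X = np.load("voxel_features.npy")  # hypothetical array, shape (n_voxels, n_features)
y = np.load("voxel_labels.npy")    # hypothetical labels: 1 = hippocampus, 0 = background

# Boosted decision stumps: each round adds a simple rule that focuses on the
# voxels the previous rules misclassified.
clf = AdaBoostClassifier(n_estimators=300, random_state=0)
clf.fit(X, y)
predicted = clf.predict(X)  # apply the learned classification rules to voxels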

We hypothesized that the three groups of segmentations would produce comparable results.

Demographic details of the entire sample have been previously described (Petersen et al.). The same criteria were used by the ADCS investigators' team in the course of the trial to define conversion to possible or probable AD. Of the 69 sites, 24 opted into the magnetic resonance imaging (MRI) sub-study and obtained T1-weighted brain MRI images from consenting clinical trial subjects. Subjects with contraindications to MRI were not scanned. The full MRI sample included subjects who had both a baseline scan of sufficient quality to allow accurate and reliable hippocampal tracing and sufficient data to be identified as MCI converters (MCIc) or MCI subjects who remained stable (MCInc).

All of these subjects were included in our manual segmentation analyses; a subset was excluded from the automated segmentation analyses. Of these subjects, two discontinued prior to their first follow-up.


Of the 24 sites, 14 used General Electric, 9 used Siemens, and one used Philips scanners. T1-weighted scans were acquired with minimum full echo and repetition times, a 25-degree flip angle, and standardized partition and slice settings. Details about the individual protocols have been previously published (Jack Jr. et al.). T2-weighted scans were inspected for abnormalities such as strokes and major white matter hyperintensities at the Mayo Clinic in Rochester, MN. The data were checked for compliance with the imaging protocol and for quality, as explained elsewhere (Jack Jr. et al.).

Using a 9-parameter linear transformation, each image was separately registered to the ICBM53 standardized brain template (Collins et al.). All scans were intensity normalized (Shattuck et al.). The contours included the hippocampus proper, the subiculum, and the dentate gyrus.
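For readers unfamiliar with the registration step mentioned above, a 9-parameter linear transform encodes 3 translations, 3 rotations, and 3 scales. The sketch below only builds the matrix such parameters imply; a registration package would estimate the parameters by optimizing a similarity metric against the template.

import numpy as np

def linear_9param(tx, ty, tz, rx, ry, rz, sx, sy, sz):
    # Translation, per-axis scaling, and rotations about x, y, z, composed
    # into a single 4x4 homogeneous matrix mapping image to template space.
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    S = np.diag([sx, sy, sz, 1.0])
    cx, sx_r = np.cos(rx), np.sin(rx)
    cy, sy_r = np.cos(ry), np.sin(ry)
    cz, sz_r = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0, 0], [0, cx, -sx_r, 0], [0, sx_r, cx, 0], [0, 0, 0, 1]])
    Ry = np.array([[cy, 0, sy_r, 0], [0, 1, 0, 0], [-sy_r, 0, cy, 0], [0, 0, 0, 1]])
    Rz = np.array([[cz, -sz_r, 0, 0], [sz_r, cz, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    return T @ Rz @ Ry @ Rx @ S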

AEW remained blinded to the subjects' conversion status and demographic information. Hippocampal volumes were extracted. The automated method that we used in this study is described in detail by Morra et al. The tracers remained blind to demographic and conversion information. We trained AdaBoost, our automated machine-learning hippocampal segmentation algorithm (Morra et al.), on each training set. From each training set, AdaBoost developed a set of classification rules to distinguish hippocampal from non-hippocampal tissue.

Examples of such features include image intensity, position, curvatures, gradients, and tissue classification maps. The performance of the segmentation models was tested on a testing dataset and subjected to careful visual inspection by both raters and the senior author (LGA) prior to segmentation of the full dataset. The hippocampal segmentations produced with each of the three methods (manual rater 1, automated rater 1, and automated rater 2) were transformed into 3D parametric surface mesh models to normalize the spatial frequency of the digitized surface points.

The medial core, a curve threading down the center of each hippocampus, was computed, and the radial distance from this medial core to every surface point of each hippocampus was determined and mapped onto each point of the hippocampal surface, producing a distance map. Radial distance, a measure of hippocampal thickness, was compared between MCIc and MCInc for each segmentation dataset (manual rater 1, automated rater 1, and automated rater 2) (Apostolova et al.).
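The radial distance computation itself is simple to sketch, assuming the surface vertices and the medial core are already available as point arrays (a minimal illustration, not the authors' pipeline):

import numpy as np

def radial_distance_map(surface_points, medial_core):
    # surface_points: (n, 3) mesh vertices; medial_core: (m, 3) samples along
    # the central curve. Returns one thickness value per surface vertex.
    diff = surface_points[:, None, :] - medial_core[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)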

We compared the demographic and conversion status characteristics between the manual and automated cohorts using Student's t-test for continuous variables and the chi-squared test for categorical variables. We used the single-measure intraclass correlation coefficient (smICC) to compare the segmentation agreement between the three methods.
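As one way to compute the smICC for the volume agreement analysis, the sketch below uses the pingouin Python package on a long-format table; the column names are hypothetical, and the original analysis software is not specified here.

import pandas as pd
import pingouin as pg

# Long format: one row per (subject, method) pair; column names hypothetical.
volumes = pd.read_csv("hippocampal_volumes_long.csv")  # subject, method, volume

icc = pg.intraclass_corr(data=volumes, targets="subject",
                         raters="method", ratings="volume")
print(icc[icc["Type"] == "ICC2"])  # single-measure, two-way random effects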

If no lines are formed, the blocks pile higher and higher until the pile reaches the top of the screen, at which point the game ends and the player loses. The goal is to keep the game going as long as possible by forming complete lines with the descending blocks. As the game progresses, the blocks descend faster, giving players less time to choose where to place each block. After each play session, the total game performance score is recorded in Tetris. We used the actual game performance data to confirm that playing the game improved performance on the trained game. Although most scores that participants reported were consistent with the actual scores, we used the actual game scores for our analyses.
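The line-clearing mechanic described above can be stated compactly in code; a toy sketch, not the game's implementation:

def clear_lines(board):
    # board: list of rows, each a list of 0/1 cells (1 = filled).
    # Complete rows vanish and empty rows enter at the top.
    width = len(board[0])
    kept = [row for row in board if not all(row)]
    cleared = len(board) - len(kept)
    return [[0] * width for _ in range(cleared)] + kept, cleared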

To evaluate the transfer effects of the brain training game, we assessed a broad range of cognitive functions. Short-term memory was measured using DS [30] and SpS [31]. Processing speed was measured using Cd [30] and SS [30]. Visuo-spatial ability was measured using MR [34]. Reading (verbal) ability was measured using JART [35]. The primary outcome measure was ST [28]. We selected ST as the primary outcome measure because (1) Brain Age is expected to improve executive functions, (2) ST is an often-used task for measuring executive functions [54], and (3) the procedure and scoring of ST have been standardized [28], [55].

Raven's Advanced Progressive Matrices Test (RAPMT) presents participants with a complex visual pattern with a piece cut out of it. The task of the participant is to find the missing piece that completes the pattern. The RAPMT is published in two sets. Set I contains 12 diagrammatic puzzles, each with a missing part that one must attempt to identify from several alternatives. It is typically used for practice and to reduce anxiety.

Set II has 36 puzzles that are identical in presentation to those in the practice set. The problems are presented in a bold, accurately drawn, and pleasant-looking format to maintain interest and to minimize fatigue. In accordance with the manual guidelines, a time limit of 30 min was given for completing Set II. The primary measure for this task was the number of correct items. In the Wisconsin Card Sorting Test (WCST), participants were required to sort cards on the basis of color, shape, or number of figures. The only feedback provided to the subject was whether responses were correct or incorrect.

The rule (color, shape, or number) could switch as often as every tenth trial. The primary measure of this task was perseverative errors. A perseverative error was defined as an incorrect response to a shifted or new category that would have been correct for the immediately preceding category. The perseverative error is the most commonly used measure of the WCST.
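The scoring rule for perseverative errors can be expressed directly in code. This is a minimal sketch of the definition given above, assuming each trial records the response, the correct response under the current rule, and the response the preceding rule would have required:

def count_perseverative_errors(trials):
    # trials: (response, correct_now, correct_under_previous_rule) tuples;
    # the third element is None before the first rule shift.
    errors = 0
    for response, correct_now, correct_before in trials:
        if correct_before is None:
            continue  # no preceding category to perseverate on
        if response != correct_now and response == correct_before:
            errors += 1
    return errors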

The Stroop task (ST) measures executive functions, including response inhibition and impulsivity. Hakoda's version is a paper-and-pencil version of the Stroop test in which participants must check whether their chosen answers are correct, unlike the traditional oral-naming Stroop task. In both the reverse Stroop task and the Stroop task, the leftmost of six columns contains a word naming a color printed in a different ink color. In the Stroop task, subjects had to check the column containing the word naming the ink color of the word in the leftmost column; in the reverse Stroop task, subjects instead responded based on the meaning of the color word rather than its ink color.

In each task, subjects were instructed to complete as many items as possible in 1 min. Operation Span (OpS) measures working memory [29]. Participants solved math problems while memorizing words. After each set of 3 to 6 words, participants were asked to recall the words in the set in the order in which they were initially presented. Because this test was administered three times, three versions of the test were used. The primary measure of this task was accuracy of recall of word sets in the correct order. The Letter-Number Sequencing (LNS) test also evaluates working memory.

For this task, the examiner read a combination of letters and numbers; participants were then asked to recall the numbers first in ascending order, followed by the letters in Japanese alphabetical order. If participants responded with letters first, followed by numbers, but with all of them in the correct sequence, credit was awarded. LNS begins with the simplest level, a three-character letter-number sequence. There are five sets of letters and numbers of increasing length, and each set consists of three trials (15 trials in total).

The primary measures of this test are raw scores, which refer to the number of correctly repeated sequences; with 15 trials, the maximum raw score is 15. In the arithmetic task, the examiner reads arithmetic problems aloud, and participants must solve them without the use of pencil and paper; this is a mental arithmetic task. The primary measure of this test is the raw score.

The digit span (DS) test evaluates verbal short-term memory. For DS-F, participants repeat numbers in the same order in which they were read aloud by the examiner. For DS-B, participants repeat numbers in the reverse order of presentation. In both, the examiner reads a series of number sequences, and the examinee must answer with the sequence in either forward or reverse order.

The maximum span length of DS-F is eight, and the maximum span length of DS-B is seven; each span length consists of two trials. The primary measures of these tests are raw scores. The spatial span (SpS) test evaluates visual short-term memory. In SpS, participants must memorize sequences of locations presented on a screen. For each trial, eight squares are shown on the screen; a sequence of black squares then flashes yellow, each square changing color for a fixed duration, with a fixed interval between squares.

At the end of the sequence, participants answer the locations in the same order in which they are presented (SpS-F), and in the reverse order (SpS-B).


The maximum span length of SpS-F is seven, and the maximum span length of SpS-B is six. The primary measures of these tests are raw scores. The digit cancellation task (D-CAT) uses a test sheet consisting of 12 rows of 50 digits. Each row contains five sets of the numbers 0-9 arranged in random order. Consequently, any one digit appears five times in each row with randomly determined neighbors.

D-CAT consists of three such sheets. Participants were instructed to search for the target number(s) that had been specified to them and to delete each one with a slash mark as quickly and as accurately as possible until the experimenter gave a stop signal. Three trials were used: the first with a single target number (6), the second with two target numbers (9 and 4), and the third with three (8, 3, and 7). Each trial lasted 1 minute, so the total time required for D-CAT was 3 min. In the second and third trials, it was emphasized that all instructed target numbers should be cancelled without omission.

The primary measure of this test is the number of hits (correct answers); we used only the number of hits in the first trial. In the simple reaction time (SRT) task, the participant was instructed to press the enter key with the right index finger as quickly as possible when the stimulus appeared. The stimulus reappeared after a random delay drawn from ten fixed intervals. The test has 4 blocks of 50 trials, for a total of 200 trials. The primary measure in this task is reaction time. We selected the SRT as an attentional measure because previous studies suggested that the SRT can measure attention [33], [56], [57], [58], [59]; thus, we treated the SRT as an attentional measure in the present study.

Thus, we considered the SRT as an attentional measure in the present study. This test measures processing speed. For Cd, the participants are shown a series of symbols that are paired with numbers. Using a key, the participants draw each symbol under its corresponding number within a s time limit. The primary measure of this test is the number of correct answers. The SS contains 60 items. For this subtest, the participants visually scan two groups of symbols a target group and a search group and indicate if either of the target symbols matches any of the symbols in the search group.

The participants respond to as many items as possible within a s time limit. Mental rotation MR measures visuo-spatial ability [34]. Participants try to determine whether two simultaneously presented shapes are the same or different. They responded as quickly and as accurately as possible by pressing one of two keys.

Participants completed 10 practice trials followed by the test trials. Analyses include only trials in which the participant made a correct response. JART (the Japanese Adult Reading Test) is a reading test consisting of 25 Kanji compound words. The reading stimuli were printed in random order, and the subjects were asked to read each Kanji compound word aloud. This task assesses reading ability and IQ. The primary measure for this task is the number of correct items.

A previous study suggested that differences in subjective feelings (e.g., motivation) between groups could affect outcomes. Based on this suggestion, after the intervention period we asked participants to answer questionnaires about their subjective feelings: (1) motivation to play the video game during the intervention period, (2) fatigue during the intervention period, (3) satisfaction with the intervention, and (4) enjoyment of the video game during the intervention period.

This study was conducted to evaluate the effects of the brain training game on cognitive functions. The pre- and post-training scores on the cognitive tests are presented in Table 2. We calculated a change score (post-training score minus pre-training score) for all measures of cognitive functions (Table 3). We conducted permutation tests of analysis of covariance (ANCOVA) on the change scores for each cognitive test.

The change scores were the dependent variable, and group (Brain Age, Tetris) was the independent variable. Pre-training scores on each cognitive test, sex, and age were covariates, both to exclude the possibility that pre-existing differences between groups affected the results and to adjust for background characteristics. We used permutation tests for two reasons. First, the permutation test is suitable for small-sample analysis and is distribution free [63], [64], [65], [66].

Second, the permutation test can control Type I error (false positives) [67], [68]. The Bonferroni [69] and Benjamini-Hochberg false discovery rate (FDR) [70] corrections are typical multiple-testing correction methods, while the permutation test is a typical resampling method [71]. The Bonferroni correction is known to be extremely conservative; it can lead to Type II errors (i.e., false negatives). In contrast, the FDR method is less stringent, which may lead to the selection of more false positives.

Thus, permutation tests have become widely accepted and recommended in studies that involve multiple statistical tests [67], [68], [72]. Effect sizes for the ANCOVAs were estimated as η² = SS_factor / SS_total, where SS_factor is the variation attributable to the factor and SS_total is the total variation, which includes SS_factor and the sum of squares for error. We also conducted two-sample t-tests for the questionnaires on subjective feelings (motivation, fatigue, satisfaction, enjoyment).
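A minimal sketch of this permutation ANCOVA in Python follows, assuming hypothetical column names; with two groups, the t statistic for the group term is an equivalent test statistic to F. This illustrates the procedure described above, not the authors' code.

import numpy as np
import statsmodels.formula.api as smf

def permutation_p(data, n_perm=2000, seed=0):
    # data: pandas DataFrame with columns change, group, pretest, sex, age.
    rng = np.random.default_rng(seed)
    formula = "change ~ C(group) + pretest + C(sex) + age"

    def group_stat(d):
        fit = smf.ols(formula, data=d).fit()
        term = [name for name in fit.params.index if name.startswith("C(group)")][0]
        return abs(fit.tvalues[term])  # two groups: one coefficient for the term

    observed = group_stat(data)
    exceed = sum(
        group_stat(data.assign(group=rng.permutation(data["group"].values))) >= observed
        for _ in range(n_perm)
    )
    return (exceed + 1) / (n_perm + 1)  # permutation p value for the group effect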

We imputed missing values using maximum likelihood estimation based on the expectation-maximization algorithm, applied to the observed data in an iterative process [74]. All randomized participants were included in the analyses in line with their allocation (the intention-to-treat principle). Moreover, effect size estimates were calculated using Cohen's d [73].
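For reference, Cohen's d for two independent groups uses the pooled standard deviation; a minimal sketch:

import numpy as np

def cohens_d(a, b):
    # a, b: score arrays for the two groups.
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)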

Our sample size estimation was based on the change score in rST (see Cognitive Function Measures). Based on average scores reported in a previous study, the sample size calculation indicated that a sample size of approximately 16 per group would achieve a power of 0.8. As presented in Figure 1, the 32 participants in this study were randomized into two groups (Brain Age and Tetris). The study was completed by 16 of the 16 members of the Brain Age group and 15 of the 16 members of the Tetris group. Table 1 presents the baseline demographics and neuropsychological characteristics of the participants.
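An a priori calculation of this kind can be reproduced with statsmodels; the effect size below is a placeholder, since the value used in the study is not shown here.

from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.0,      # placeholder Cohen's d, not the study's value
    alpha=0.05,
    power=0.8,
    alternative="two-sided",
)
print(round(n_per_group))  # required sample size per group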

Based on the intention-to-treat principle, we imputed the missing values for one participant in the Tetris group (see Data Analysis). Before analyzing the transfer effects of the brain training game to other cognitive functions, we examined whether the practice improved performance on the trained games.

These results demonstrate that the effects of playing Brain Age transferred to executive functions, working memory, and processing speed. Because the training games in Brain Age required participants to respond as quickly as possible, there was a possibility that SRT performance would affect the measured improvements in cognitive functions. To check this possibility, we conducted additional analyses using the SRT score obtained before the video game training as a covariate.

The results were similar to those obtained without the SRT score as a covariate and are presented in Table 3, indicating that reaction time did not affect the improvements in cognitive functions. The results also show that the effects of playing Tetris transferred to attention and visuo-spatial ability.

To investigate differences in subjective feelings (e.g., motivation) between the groups, we compared the questionnaire responses; there were no significant differences in subjective feelings (Table 5). The most important findings of this study were that playing the commercial brain training game Brain Age significantly improved executive functions, working memory, and processing speed compared to playing the non-brain-training game Tetris in young adults. The present results demonstrate the beneficial transfer effects of the commercial brain training game on a wide range of cognitive functions in young adults.

Moreover, these results showed that playing Tetris can engender improvements in attention and visuo-spatial ability compared to playing Brain Age. These findings are consistent with previous evidence showing that playing video games can engender improvement in untrained cognitive functions [7], [8], [11], [12], [42]. In cognitive training studies, the transfer effect can also be classified in terms of a near transfer effect and a far transfer effect [76], [77].

The near transfer effect refers to improvements in cognitive domains that are closely related to the trained cognitive processes. In contrast, the far transfer effect refers to improvements in cognitive domains that are not closely related to the trained cognitive processes. From the viewpoint of near and far transfer, the cognitive measures in this study can be divided into measures of near transfer and measures of far transfer.

For the Brain Age group, executive functions, working memory, and processing speed were the measures of near transfer, and the others were measures of far transfer, because the training tasks in Brain Age would be expected to train executive functions, working memory, and processing speed. For the Tetris group, attention and visuo-spatial ability were the measures of near transfer and the others were measures of far transfer, because Tetris would be expected to train attention and visuo-spatial ability.

Our results show that playing Brain Age and playing Tetris produced only near transfer effects, not far transfer effects.

Several explanations might account for the absence of far transfer effects in this study. First, the training period of our study (4 weeks) may not be long enough to produce far transfer effects.

Second, our video game training was not designed as adaptive training. Results of previous studies suggest that adaptive training methods are more effective for improving cognitive functions than non-adaptive training programs [43], [78]. The mechanism of the near transfer effects of playing Brain Age can be explained by a recent hypothesis, which proposes that a transfer effect can be induced if the processes engaged during both the training and transfer tasks overlap and involve similar brain regions [8], [19].

Most training games in Brain Age entail elements of calculation and reading [8]. Performing these processes recruits the prefrontal regions [52], [53] and the precuneus [79], [80]. Executive functions, working memory, and processing speed, which showed significant transfer effects from the brain training game in this study, also involve the prefrontal cortex [81], [82], [83] and the precuneus [84], [85], [86], [87], [88].




