If you want to go pay the "IQ tax" later, go right ahead on your own. It's none of my business anyway.
  @晴天宝宝呀
16:20:57  Came here to learn from a senior.  -----------------------------  Are you still at it?
Dictation, part 20.
Four words I didn't know how to spell; everything else was a listening error.
It felt like I understood, but I hadn't really! A few details need digging into, and in two places I couldn't even tell a short phrase was there, so I couldn't write it down.
From now on: fewer arguments online, more actual work.
Is dictation practice effective? Reposting an article. Copyright belongs to the author; for commercial reproduction contact the author for permission, for non-commercial reproduction credit the source. Author: 光头奶爸. Link: /question//answer/. Source: Zhihu.

You didn't explain what you mean by "exaggerated," so I'm not sure I follow. As for English learning methods, I personally consider dictation extremely important; for beginners, arguably the most important exercise of all. If you have done dictation for a long time with poor results, that doesn't prove dictation is useless; most likely the method was wrong. Since another question asked me to explain my understanding of English learning methods, here it is: all substance, no storytelling, no wisecracks. Note that everything below is only my own shallow opinion and not necessarily correct; absorb it selectively and critically.

1. Listening is the foundation of speaking, reading, and writing. Why is listening so important, and why should a beginner start there? Because listening underpins the other three skills. That listening determines speaking is easy to accept: only after a great deal of listening and imitating can you open your mouth and talk. People born deaf usually lose the ability to speak, a great misfortune, which shows how much speaking depends on hearing. That listening determines reading, on the other hand, is something few people recognize; when they can't understand a text, they blame a small vocabulary or weak grammar. See this question: 如何提高英文阅读水平? - 光头奶爸的回答. In my experience, listening has an enormous effect on reading: raise your listening and your reading changes qualitatively. At bottom, English is a phonetic language; the written word maps directly onto its sound. Take these lines from The Catcher in the Rye: "Wait a second, willya?" I said. "I'm asking you a question. Did they say what time they'd be back, or didn't they?" And: "I'll keep in touch with you and all when I'm gone, if I go. C'mon. Take that off your head. C'mon, hey, Phoeb. Please. Please, willya?" Holden says these to his little sister, old Phoebe, and to capture the feel of speech the author puts spoken pronunciations straight onto the page: willya, C'mon. Such examples are common in English fiction. Or take Ha Jin's A Free Life: "We were told zat he is on zer plane." Nan often mismanaged the interdental sound that the Chinese language doesn't have. There the author boldly renders Chinese-accented English as written text. If your listening is good, then in the early stages of reading you will have a strong sense of sound, as if someone were reading into your ear, and passages like these pose no problem at all. More importantly, if you can fully understand a speaker by ear, then understanding the written sentence is merely a matter of converting letters back into sounds. Yes, the experts say that to read fast you must stop "reading" word by word, and above all must not vocalize; they are right, but reading speed is the house you build after the foundation is laid. If your listening foundation is missing and comprehension itself is shaky, chasing speed reading puts the cart before the horse. Listening helps writing for the same reason it helps reading: listen enough and you can speak, and since English is phonetic, if you can say it you can write it. If your spoken English were strong enough that complex sentences rolled off your tongue the way they do for Obama, would you worry about writing them down? People with strong listening and speaking also "hear" their prose as they write, as if a little voice in the head were dictating sentence after sentence, and the pen follows naturally. Of course, once past the beginner stage, reading does more than listening to improve your writing; more on that below.

2. Listening must guarantee effective input. Effective input means actually extracting the speaker's information through your ears. It sounds simple but isn't, and requires at least two things. First, the material must suit your level. If you have no listening foundation at all, choose material that is slow, small in vocabulary, and freshly updated (why freshness matters is explained below). Some people, stung by failing to understand English, vow to conquer listening head-on and start with long, vocabulary-heavy programs like 60 Minutes or This American Life. It goes in one ear and out the other as a single blur; nothing registers, and within days they give up. I think a beginner does best starting with VOA Special English: take a news clip of about five minutes and transcribe it. "What if even that is beyond me? Is there anything easier?" As far as my knowledge goes, there is nothing better. Anyone who has studied English for a few years rarely understands nothing of VOA Special; after many repetitions some information usually comes through, and that is the moment to spend real effort cracking the parts you cannot catch (methods below). If you cannot persist even at this level, the problem is not method but perseverance. Second, you must do true dictation: write down every single word you hear. It looks inefficient and time-consuming, but it is the most reliable way to confirm you actually caught every word. Try an experiment: take a Chinese news clip you understand completely and transcribe it; you will find the sentences you "understood" hide plenty of words you never noticed. Understanding a sentence is a holistic, approximate sensation, and unless you write each word down, many details slip past. The point of English dictation is to make a beginner separate, out of that blur of sound, every sentence, every word, every detail; through massive practice the words pop out one by one, and then you understand them. Only after that process can you rebuild the ability, as in your native language, to fuse individual words back into whole meanings.

3. The input must come through the right channel. This sounds identical to point 2, but it is a different issue: should you look at the transcript while doing dictation? It troubles many learners and opinions differ; the currently top-ranked answer by @Matrix Frank, for instance, says you should. My view: never look at the transcript while transcribing. Predictably, many will protest that listening over and over without the text is clumsy, torturous, and a waste of time. Let me explain why, and then how to decode the audio without the text. Human comprehension is path-dependent: the brain instinctively takes the easiest route to make sense of what it doesn't understand, and if you read while you listen, it will understand the material through reading. That is instinct, beyond your control. (This is my own conclusion; I would welcome a specialist's confirmation.) I have seen many people listen with the printed text in hand, or watch American shows with English subtitles. It feels wonderful: nothing seems hard to "hear," even tough material seems mostly caught, and the only problem looks like a few unknown words, a vocabulary issue. But come exam day, or a film with no subtitles, or a foreigner in full flow, the material suddenly seems harder and nothing comes through.
To put it bluntly, you never understood by ear in the first place, and you still don't; reading was catching those words for you all along. It proves your hearing and your eyesight both work and can be matched to each other, nothing more. If you don't believe it, grade your listening now while reading the transcript along with a recording; then wait until you have forgotten what you read and listen to the same material without the text, and you will see your score drop visibly. Transcribing without the text exists precisely to break the reading path-dependence, to break the exam-bred habit of reading without hearing, and to force you to get the information entirely (and only) through your ears: to build the listening-comprehension channel and raise it to the level of your reading channel, or beyond. While I'm at it: I do not recommend American TV shows as beginner listening material. Subtitles aside, the actors' gestures, expressions, and staging all hint at the meaning, another path-dependence that does a beginner no good; see my other answer: 适合学习英语口语的美剧有哪些? - 光头奶爸的回答. So without the text, how do you decode? Several ways. Guess the word through phonics: English is phonetic, and once its sound-spelling rules are internalized you can guess the spelling from the sound and confirm it in a dictionary. Guess from context: work out what the word ought to mean, then find, among words with that meaning, the one whose pronunciation matches. Use supporting material: if you are listening to fresh news, related reports on foreign news sites give you background that corroborates what you hear (this is why I stressed timeliness above; current news can be cross-checked against foreign sites, even against the domestic evening news). If all of that fails, hand the word to time; it may click on its own later. In my experience, with a five-minute clip of suitable difficulty and an hour and a half a day, any word still undeciphered after two weeks can be let go, or looked up in the transcript. A side note: should you memorize vocabulary? The debate never ends, with no settled answer. My view is yes, but with the right method; above all do not grind through a word book from cover to cover. Record the words you meet in dictation and reading, and keep revisiting them. The full method is another long story, so I will leave it for another time.

4. Extend listening into speaking, reading, and writing. For a beginner, a five-minute clip at one or two hours a day can be fully transcribed in about two weeks. Many people then pick another clip and start over; that is a serious mistake. At that point the dictation work is only half done, and the main act, speaking, reading, and writing, is still ahead. Listening underpins the other skills; in turn, speaking and reading strongly reinforce listening, provided your pronunciation is correct (at least correct to your own ear). People who hear accurately usually pronounce accurately, and the reverse holds too. If you yourself pronounce a word wrongly, you can hardly expect to recognize it by ear; it is like Hanyu Pinyin, where students raised in the south score lower than northerners on the first question of the gaokao Chinese paper, purely because of the sounds they grew up with. So how does a beginner whose pronunciation is not yet standard improve with no one to correct him? (Non-standard is not the same as accented: an Oxford accent is lovely, but an Indian accent does not block communication.) The answer is to extend listening into speaking. Play the clip you have transcribed through a loudspeaker, volume up, and work sentence by sentence as in dictation, looping each sentence while you read your own transcript aloud over it, until your voice and the announcer's fuse completely: rhythm, tone, pitch, every aspect identical. Then the next sentence, and so on, over and over. A few points here: open your mouth wide, the wider the better, since Chinese speakers tend to talk without opening the mouth and opening wide helps accuracy; watch th and v, one needs the tongue between the teeth and the other the teeth against the lip; and your facial muscles should ache, because no ache means you are still articulating the Chinese way. By the time this step is finished you can practically recite the passage, and the process doubles as reading practice. Meanwhile, keep separate reading training going (I used the magazine 英语世界 early on; there must be plenty of material now; method here: 如何看英文书去提升自己整体英文水平? - 光头奶爸的回答), and find someone to talk to. Don't think about technique, correctness, or topics; find a talkative foreigner and begin. Pointers here: 怎么开口讲英语? - 光头奶爸的回答. Finally, one more thing: I have said nothing about grammar; see 英语学习中错误的常识(一).

================ Update ======================

My child has been ill and work has piled up, so I haven't updated or answered the comments; a quick reply over lunch today.

Q: So should I learn the phonetic symbols first and memorize words from their transcriptions? Sorry, my English foundation is weak.
A: Learning the phonetic symbols matters a great deal; to me they are as foundational as the ABCs. I was lucky to have a good teacher who taught us the IPA alongside the alphabet in junior high. I suspect anyone who has studied English for a year or two knows at least some of it; if you truly know none, learn it from scratch. Two things matter most. First, pronounce every symbol correctly, because otherwise every dictionary lookup is wasted: read the transcription wrong and you will mis-say and mis-hear the word ever after. Second, from the day you learn the symbols, look up the transcription of every new word. Many people read only the Chinese gloss and stop; that is a grave mistake. Check the pronunciation first, then fix the word in memory by saying it, hearing it, and writing it all at once. That is the basic method.

Q: How many words per sentence is reasonable for dictation? Under 15 I can catch in one pass; past 15 everything scrambles and I have to go word by word...
Q: When transcribing, do you pause after each sentence and write, or listen repeatedly without pausing until it is all down?
A: One answer for both: whatever suits you. The only rule is no transcript; within that, do whatever gets the words onto the page. The key is to begin. Once you begin you will find your own way, like swimming: the more precisely you rehearse every stroke on shore, the less you dare get in, but once in the water and moving, the details you worried about stop being problems. At first it took me an hour to transcribe one minute of VOA Special, and even then not all of it. Funniest of all, I heard "Israel" as "is real" and wondered why the announcer kept insisting things were real. It makes a good story now. If your listening is truly weak, if you don't know the IPA and cannot catch even the simplest words, then compromise: read the transcript aloud along with the recording, again and again, until you sound to yourself exactly like the announcer; shelve the passage for a month, then transcribe it without the text. You will find your listening much improved, though plenty of words will still escape you. Improve gradually like this.

Q: Is there an Android app for This American Life?
A: I don't know about Android; you can download from their site, http://thisamericanlife.org. I subscribe through iTunes. Plenty of people think the iPhone is for showing off; they don't realize how much is integrated behind it: besides the App Store there are iTunes radio, university open courses, and podcasts in audio and video, all put together very well.
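The checking step in the routine above, comparing what you wrote against the reference transcript word by word, is easy to automate. Here is a minimal sketch (mine, not from the answer) using Python's standard difflib to diff a typed attempt against the transcript; the file names are illustrative assumptions.

```python
# A small self-checking aid for the dictation routine described above:
# diff your typed transcript against the reference text word by word,
# so substitutions like namely/mainly or picked up/picked out stand out.
import difflib
import re

def words(text):
    # lowercase word tokens; apostrophes kept so would've != would
    return re.findall(r"[a-z']+", text.lower())

def check(attempt_path="my_attempt.txt", reference_path="transcript.txt"):
    with open(attempt_path, encoding="utf-8") as f:
        attempt = words(f.read())
    with open(reference_path, encoding="utf-8") as f:
        reference = words(f.read())
    matcher = difflib.SequenceMatcher(a=reference, b=attempt, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            continue
        wrote = " ".join(attempt[j1:j2]) or "(nothing)"
        truth = " ".join(reference[i1:i2]) or "(nothing)"
        print(f"{tag:>7}: you wrote {wrote!r}, transcript has {truth!r}")
    print(f"Word-level similarity: {matcher.ratio():.0%}")

if __name__ == "__main__":
    check()
```

Run it after each session and the mishearings are printed as replace/insert/delete opcodes, with a rough word-accuracy score at the end.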
Another repost. Copyright belongs to the author; for commercial reproduction contact the author for permission, for non-commercial reproduction credit the source. Author: 金伟榕. Link: /question//answer/. Source: Zhihu.

Dictation is a very effective way to test listening, but not an efficient way to improve it. Its return per unit of time is low: it is slow, draining, and bad for your interest in English, so I advise against doing much of it. When I taught English majors at China Foreign Affairs University I never had them do dictation. What I cared about was how fast they could raise their listening, and above all their overall command of English, not whether they caught every word of one passage. Listening has three problems to solve: 1) hearing the words; 2) understanding, since sometimes, for lack of feel for the language or of background knowledge, you catch every word of a sentence and still cannot make out what it means; 3) retaining it, including knowing how to seize the key points. Novices often find that "I heard it, I understood it, and the moment the headphones came off, half of it was gone!"

Points to note when practicing listening. First, never process the meaning through Chinese while you listen. If you could translate into Chinese in real time as you listened, you could already work as a simultaneous interpreter; why would you be discussing listening practice? At the start everyone wonders, "If I don't think in Chinese, how can I understand at all?" The answer is to persist. It is a gate everyone must pass through, and after a while understanding directly in English becomes natural. Second, don't let non-essential words hold you up. Words like of and have, to and the are easily confused at first, and dictation devotees lock horns with precisely these. My advice: don't do much dictation and don't grind. If you have the meaning of the sentence and a to-versus-the doubt doesn't block understanding, let it pass. Grammar and word collocation are better studied while reading, where the same work costs less time and effort. Your listening is your weak point at this stage; why attack the enemy's strength with your own weakness?

Concretely: with a new listening passage, the first two or three passes must run straight through from start to finish, never word by word or sentence by sentence, aiming not at most of the content but only at the gist. If after two or three straight passes you have caught only a few words, that is completely normal for a newcomer to listening and no reason to blame yourself (even if your reading comprehension is at the level of having finished New Concept English Book 4 but you have never trained listening). Step two: listen sentence by sentence. Taking a long sentence in one gulp may be hard at first; by the time it ends you have forgotten how it began. No surprise there: even the best interpreters rely on notes to support memory, and you are still feeling for the doorway. Split a long sentence into chunks, and the instant you finish a chunk, repeat it aloud. That works better than writing: you must know what you heard, and you are imitating the pronunciation at the same time. A word you cannot even say properly, buried in a 32-word sentence, and you expect to catch it instantly? In this second step, replay the hard spots several times and try to crack them; but if a sentence, or one or two words in it, resists after four or five tries, set it aside and keep going. What comes later often explains what came before. For instance, a novice listening to a news item hears at the start that "SMP" fell by so much and cannot fathom what SMP means; about those letters he feels, "I'm certain that's what I heard!" The rest of the item talks about share prices. If he knows something about the stock market, it may dawn on him further in: that "SMP" must be a mishearing of S&P, the Standard and Poor's index. But if instead of listening on he keeps butting heads with "SMP," the loss far outweighs the gain. After one sentence-by-sentence pass, it is best to do another: where last time you had to pause for an oral repeat every four or five words, this time take in as many words as you can before pausing to repeat. The pauses stretch out, and gradually you can hold a whole sentence (though perhaps not every word of a long one; that too is normal). As for the really hard spots, I say don't chase every word. Understand 90 to 95 percent of a listening passage and you are done. That is not sloppiness; it is using time efficiently, and more importantly it protects your interest in and enthusiasm for English. If you insist on nailing down, writing out, and correctly spelling every single word, keep an honest account of your time: you will likely find 30 or 40 percent of your listening time going to a handful of words, even putting the audio down (breaking your concentration) to check a spelling. That 30 or 40 percent is the most exhausting part and the surest to sour your mood. Working yourself into a temper over a few words, wrecking your morale and your appetite for English: is it worth it? Rather than butting heads, take a different road and spend your English-learning time more scientifically and efficiently:
1. Put the time you would have spent on dictation into oral repetition: the moment you hear a phrase or a sentence, repeat and imitate it aloud. Almost everyone whose listening needs work has imperfect pronunciation too, and training the ear's acuity is the first step toward better pronunciation (if you cannot even hear the difference between sounds, how will you find and correct your own errors?). So imitate and study the pronunciation as you listen; you train listening and repair your pronunciation in the same stroke.
2. Gather up the good words and phrases you have just heard. "Good words and phrases" means far more than new vocabulary; it means every good turn of expression. Often you know every word in a sentence, yet left to yourself you would never in a lifetime think to put it that way. That is exactly what deserves your attention and your notebook.
3. As soon as the good phrases are sorted, use them, together with the vocabulary you already command, to retell aloud what you heard (not recite: state the same content in your own words). A great flaw of dictation is that it sees the trees and misses the forest: all effort goes into catching, writing, and spelling each word, and none into understanding and remembering what the whole passage actually said. Retelling forces you to see both the trees and the forest, and builds the habit of listening for substance and retaining content. Besides, many people chat in English without trouble, yet ask them to organize their thoughts quickly and deliver them in order, clearly and precisely, and the spirit is willing but the flesh is weak. Retelling is superb training for that second kind of speaking. (Ask anyone with a job: of the colleagues who are tongue-tied, who only plug away without ever letting anyone know their ability and their contribution, how many have not come to a bad end?)
4. Practice in this combined way and your listening, vocabulary, pronunciation, speaking, and feel for the language advance together, day by day. English is in the end a single whole, and these skills are inseparable from reading and the rest. As your vocabulary, language sense, and pronunciation strengthen, you begin to know how to seize the key points as you listen, and which related key words to look out for when a term goes by; all of that is an enormous help to your listening.
5. Consider, too, the efficiency of your own brain. Fix it on one task, say an hour and a half or two hours of dictation a day, and you tire quickly, efficiency sliding as you go, never mind what it does to the energy and focus you have left afterward. Blend the methods above instead, shifting approach and angle at the right moments, and fatigue comes far more slowly. Hour for hour, this kind of listening-plus-integrated-skills training gets efficiency and results that dictation cannot match.
Posted an image.
Dictation, part 21. Four words I simply didn't know; the rest were listening errors: I heard namely as mainly, economic as economical, and what should have been closer as close.
Reposting an article that shows why I grind so stubbornly on pronunciation.

Phonics is a method for teaching reading and writing of the English language by developing learners' phonemic awareness—the ability to hear, identify, and manipulate phonemes—in order to teach the correspondence between these sounds and the spelling patterns (graphemes) that represent them. The goal of phonics is to enable beginning readers to decode new written words by sounding them out, or in phonics terms, blending the sound-spelling patterns. Since it focuses on the spoken and written units within words, phonics is a sublexical approach and, as a result, is often contrasted with whole language, a word-level-up philosophy for teaching reading (see History and controversy below). Since the turn of the 20th century phonics has been widely used in primary education and in teaching literacy throughout the English-speaking world. More specifically, synthetic phonics is now the accepted method of teaching reading in the education systems in the UK and Australia.
Wrong, that isn't the piece I meant. The original is set to forbid reposting; never mind.
But screenshots still work!
Half an hour of dictation and checking, part 22, about killing indoor pests with high heat. Three technical terms I couldn't understand; the rest I missed because my own pronunciation of them was wrong, including hearing full six as four six.
Dictation, part 23, a teacher lecturing on thesis writing. Four words I couldn't get down, but nothing that blocked comprehension.
I heard ...are due in six weeks as are doing six weeks.
I really ought to sum things up properly after each listen!
Dictation, part 24, on the art of the American painter Grant Wood. More errors than part 23. Two sentences I couldn't make out at all, and I heard regionalism as originalism.
Watched Star Wars past midnight last night and got back late; slept until noon today.
Posted an image.
Dictation, part 25. Twice I lost the thread; the rest were writing mistakes. I wrote would've cost as would cost (that one has to be told apart by grammar), heard picked out as picked up, and spelled century as centuary.
Posted an image.
Old TOEFL, part 26. Quite a few errors; several nouns I never caught.
The Key To Learning Pronunciation. Posted on December 5, 2013 by Gabriel Wyner.

As rumor has it, you can't learn to have a good accent if you're above the age of 7, or 12, or some other age that you've most definitely already exceeded. But that can't possibly be true. Singers and actors learn new accents all the time, and they're not, on average, smarter than everyone else (and they certainly don't all start before the age of 7). So what's going on here? Why does everybody tell you that you can't learn good pronunciation as an adult? And if that's not true, what is? In this article, we'll take a tour through the research on speech perception and pronunciation, and we'll talk about learning pronunciation efficiently as an adult. But first, allow me a moment on my soapbox:

Pronunciation is important

This is a big topic, and as a singer, it's a topic close to my heart. I find accents extraordinarily important. (Image caption: This is a fényképezőgép.) For one, if you don't learn to hear the sounds in a new language, you're doomed to have a hard time remembering it. We rely upon sound to form our memories for words, and if you can't even comprehend the sounds you're hearing, you're at a disadvantage from the start. (Try memorizing Hungarian's word for camera, fényképezőgép, or train station, vasútállomás. These words are brutal until you really get a feel for Hungarian sounds.) But in addition to the memory issue, a good accent connects you to people. It shows people from another culture that you've not only taken the time and effort to learn their vocabulary; you've taken the time to learn how their mouths, lips and tongues move. You've changed something in your body for them – you've shown them that you care – and as a result, they will open up to you. I've seen this repeatedly when I sing or watch concerts in Europe. As a rule, audiences are kind, but when you sing in their native language, they brace themselves. They get ready to smile politely and say, "What a lovely voice!" or "Such beautiful music!" But beneath the surface, they are preparing for you to butcher their language and their heritage before their eyes. No pressure. At that moment, if you surprise them with a good accent, they open themselves up. Their smiles are genuine. You've shown them that you care, not just with your intellect, but with your body, and this sort of care is irresistible. But how do you actually do something about pronunciation?

Research on Ear Training and Pronunciation
Good pronunciation is a combination of two main skills: ear training and mouth training. You learn how to hear a new sound, and you learn how to make it in your mouth. It's the first of these two skills that has to come first: if you can hear a sound, you can eventually learn to produce it accurately, but before then, you're kind of screwed. So for the moment, we'll focus on ear training. While doing research for my book, I came upon a wonderful set of studies by James McClelland, Lori Holt, Julie Fiez and Bruce McCandliss, where they tried to teach Japanese adults to hear the difference between "Rock" and "Lock." After reading their papers, I called up and interviewed Dr. McClelland and Dr. Holt about their research. The first thing they discovered is that ear training is tricky, especially when a foreign language contains two sounds that are extremely similar to one sound in your native language. This is the case in Japanese, where their "R" [ɾ] is acoustically right in between the American R [ɹ] and L [l]. When you test Japanese adults on the difference between Rock and Lock (by playing a recording of one of these words and asking them which one they think you played), their results are not significantly better than chance (50%). So far, so bad.

Listening Practice

The researchers tried two kinds of practice. First, they just tested these Japanese adults on Rock and Lock for a while, and checked to see whether they improved with practice. They didn't. This is very bad news. It suggests that practice doesn't actually do anything. You can listen to Rock and Lock all day (or for English speakers, 불/풀/뿔 [bul/pul/ppul] in Korean), and you're not going to learn to hear the differences between those sounds. This only confirms the rumors that it's too late to do anything about pronunciation. Crap. Their second form of practice involved artificially exaggerating the difference between L and R. They began with extremely clear examples (RRrrrrrrrrock), and if participants improved, stepped up the difficulty until they reached relatively subtle distinctions between the two recordings (rock). This worked a little better. The participants began to hear the difference between Rock and Lock, but it didn't help them hear the difference between a different pair of words, like Road and Load. In terms of a pronunciation training tool, this was another dead end. Then they tried feedback, and everything changed.

Testing pairs of words with feedback

They repeated the exact same routine, only this time, when a participant gave their answer ("It was 'Rock'"), a computer screen would tell them whether or not they were right ("*ding* Correct!"). In three 20-minute sessions of this type of practice, participants permanently acquired the ability to hear Rs and Ls, and they could do it in any context. Not coincidentally, this is how actors and singers learn. We use coaches instead of computerized tests, but the basic principle is the same. We sit with an accent coach and have them read our texts. Then we say our texts out loud, and the coach tells us when we're right and when we're wrong. They're giving us feedback. They'll say things like "No, you're saying siehe, and I need sehe. Siehe…Sehe. Hear that?" And as we get closer, they'll continue to supply feedback ("You're saying [something that's almost 'sehe'] and I need sehe.") After the coaching, we'll go home, listen to recordings of these coaching sessions, and use those recordings to provide us with even more feedback. Now, some caveats.
Participants didn't reach a full native ability to hear the difference between Rock and Lock. Their accuracy seemed to peak around 80%, compared to the ~100% of a native speaker. Further investigation revealed what was going on.

Chords

Consonant sounds have lots of different components (known as "formants"). Basically, a consonant is a lot like a chord on a piano: on a piano, you play a certain combination of notes together, and you hear a chord. For a consonant, you make a certain (more complex) combination of notes, and you hear a consonant. This isn't just a metaphor: if you have a computerized piano, you can even use it to replicate human speech. English speakers tell the difference between their R's and L's by listening for a cue known as the 3rd formant – basically, the third note up in any R or L chord. Japanese native speakers have a hard time hearing this cue, and when they went through this study, they didn't really get any better at hearing it. Instead, they learned how to use an easier cue, the 2nd formant – the second note in R/L chords. This works, but it's not 100% reliable, thus explaining their less-than-native results. When I talked to these researchers on the phone, they had basically given up on this research, concluding that they were somewhat stumped as to how to improve accuracy past 80%. They seemed kind of bummed out about it.

Possibilities for the future

But step back a moment and look at what they've accomplished here. In three 20-minute sessions, they managed to take one of the hardest language challenges out there – learning how to hear new sounds – and bring people from 50% accuracy (just guessing) to 80% accuracy (not bad at all). What if we had this tool in every language? What if we could start out by taking a few audio tests with feedback and leave with pre-trained, 80% accuracy ears, even before we began to learn the rest of our language? We have the tools to build trainers like this on our own. All you need is a spaced repetition system that supports audio files, like Anki, and a good set of recorded example words (a bunch of rock/lock's, thigh/thy's, and niece/knee's for English, or a bunch of sous/su's, bon/ban's and huis/oui's for French). They take work to make, but that work only needs to be done once, and then the entire community can benefit. So I'm going to do it. Pronunciation is too important, and this solution is too valuable to wait for some big company to take over. Over the next 9 months, I'm going to start developing good example word lists, commissioning recordings and building these decks. I'm going to recruit bilinguals, because with bilinguals, we can get recordings to learn not only the difference between two target-language sounds, like sous and su, but also the difference between target language sounds and our own native language sounds (sous vs Sue). I ran this idea by Dr. McClelland, and he thought that may work even better (hell, we might be able to break the 80% barrier). And I'm going to do a few open-ish beta tests to fine tune them until they're both effective and fun to use. Hopefully, with the right tools, we can set the "It's too late to learn pronunciation" rumors to rest. We'll have a much easier time learning our languages, and we'll have an easier time convincing others to forget about our native languages and to speak in theirs.
The article I just reposted makes the point that listening alone did not improve Japanese speakers' ability to tell R from L; exaggerated-contrast training helped somewhat, but it was not the most effective, and training with feedback worked best of all. But how would you actually design that feedback training?
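One concrete way to sketch it: the drill in the studies is just "play one member of a minimal pair at random, ask which it was, and say right or wrong immediately." Below is a minimal sketch of such a trainer in Python, assuming (my assumptions, not the article's) a pairs/ folder of recordings named like rock_1.wav and lock_1.wav, and the third-party playsound package for audio.

```python
# A minimal sketch of the feedback drill from the McClelland/Holt studies:
# play a randomly chosen recording of a minimal pair, ask the learner which
# word it was, and give immediate right/wrong feedback while tracking accuracy.
import random
from pathlib import Path
from playsound import playsound  # pip install playsound

PAIR = ("rock", "lock")  # any minimal pair works: thigh/thy, niece/knees...

def load_clips(folder="pairs"):
    # expects files named <word>_<n>.wav, e.g. rock_1.wav, lock_3.wav
    clips = [(p, p.stem.split("_")[0]) for p in Path(folder).glob("*.wav")]
    return [(p, w) for p, w in clips if w in PAIR]

def drill(n_trials=50):
    clips = load_clips()
    correct = 0
    for i in range(1, n_trials + 1):
        path, answer = random.choice(clips)
        playsound(str(path))  # blocks until the clip finishes
        guess = input(f"[{i}/{n_trials}] {PAIR[0]} or {PAIR[1]}? ").strip().lower()
        if guess == answer:
            correct += 1
            print("Correct!")  # the immediate feedback is the essential ingredient
        else:
            print(f"Wrong, it was '{answer}'.")
    print(f"Session accuracy: {correct / n_trials:.0%}")

if __name__ == "__main__":
    drill()
```

Swap in any other pair by changing PAIR and the recordings; the right/wrong message after every trial is exactly the feedback the studies found necessary for learning.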
Dictation, part 27. A dialogue of about two minutes: a rural doctor who graduated from the medical school comes back to pitch careers in rural health care to its students.
How to Teach Old Ears New Tricks: learn a new language more quickly by focusing on pronunciation first.

"Hi! I'm Gabe. What's your name?" "Seung-heon. Nice to meet you, Gabe." Uh-oh. "Sorry, I missed that. What's your name again?" "Seung-heon." This is bad. "Sung-hon?" "Seung-heon. It's okay—just call me Jerry. Everyone does." I hate it when this happens. I have every intention of learning this person's name, and my brain is simply not cooperating. I can't seem to hear what he's saying, I can't pronounce it correctly, and there's no way I'm going to remember it for more than five seconds. Thankfully, these Seung-heon experiences do not happen often: in most parts of the English-speaking world, we encounter far more Johns, Susans and Franks than Seung-heons. Generally, we can go about our usual social interactions without much trouble. When we decide to do something rash like learn a foreign language, however, we run into difficulties. Nearly every new word is another Seung-heon. Our brain struggles to categorize the new sounds in each word—was it Seung, Seong or Sung?—and without the ability to do so accurately, the words do not stick in memory. That aural roadblock is one of the reasons that learning a language as an adult can be so challenging. Fortunately, researchers are starting to find ways to overcome this hurdle. If we train our ears for a few hours before diving into vocabulary and phrases, learning a language can become easier than we ever imagined.

Why We Can't Learn Like Kids

Most of us English speakers can't tell the difference between Seung, Seong and Sung now, but back when we were babies we could. A large body of work shows that babies possess a remarkable ability to distinguish all sounds in all languages. But between six and 12 months of age, they begin homing in on their native language's sounds. They become experts in their own language, and as a consequence they lose their facility with the unfamiliar sounds of foreign languages. As it turns out, it's challenging to regain that ability. Some of the best data on this phenomenon come from studies of Japanese adults learning to hear the difference between r and l. Why the Japanese? For one, because the r-versus-l distinction is notoriously difficult for them: Japanese speakers tend to do little better than chance when attempting to tell their rocks from their locks. Second, they know they have this difficulty, and many will happily volunteer to come into a research laboratory—whereas English speakers do not care much about learning the difference between Hindi's four nearly identical-sounding d's. When you were a baby, you learned to tell rocks from locks by listening to lots of auditory input. You heard about rakes and lakes, fires and files, and your little brain began figuring out that certain sounds fit into the r-like group and that other sounds fit into the l-like group. Unfortunately, adults do not learn in the same way. In one robust study from 2002, researchers led by psychologist James L. McClelland, then at Carnegie Mellon University, sat Japanese adults down in front of a computer with headphones, played a recording of rock or lock at random, and asked them to press the R or L key on their keyboards accordingly. As expected, they performed terribly, only slightly better than chance. After continuing the test for an hour, straining to hear any hint of the difference between r and l, they still did not improve. Auditory input might work for babies, but it simply does not for adults. The researchers then tried something new. Same study, same dismal test scores, different Japanese adults. This time, in the training phase of the experiment, researchers gave their test subjects immediate feedback. Every time a subject pressed the R or L button on their keyboard, they got a green check mark or a red X on their screen, indicating whether they were right or wrong. Suddenly, everyone began to learn. Within an hour of testing, subjects were reaching 80 percent accuracy at identifying r and l, even in unfamiliar words. In a similar study in 1999, subjects even began spontaneously pronouncing the two sounds substantially better. Many studies have subsequently confirmed that feedback is an essential ingredient in training our brain to hear new sounds, and when we can hear new sounds, we naturally start to produce them more accurately. Granted, some sounds may still cause difficulties—just because you can discern a Czech word such as zmrzl doesn't mean that your mouth will cooperate without practice—but overall, a few hours of this type of ear training is a tremendously effective tool for improving listening comprehension, memorization and pronunciation. Yet most language-learning programs dive right into conversation or vocabulary, expecting students to pick up these tough foreign sounds on the fly.
Pushing beyond the Plateau

The disconnect between research and real-world language training does not end there. Studies that train their students with a small amount of input—just a few words uttered by a single speaker, as you often find in a classroom or a language-study book on tape—fail to produce comparable results in real-world tests where subjects encounter many different words, speakers and dialects. It turns out that the more voices and the more words tested in the lab, the better the results outside of the lab. In a study published in 2013, for example, linguist Melissa M. Baese-Berk, then at Michigan State University, and her colleagues showed that an hour of training over two days on five different varieties of accented English improved understanding of all types of accented English, even totally novel accents. These findings gel with the research about learning foreign sounds—in general, listening to a broad array of speakers will train your brain faster and let you more reliably transfer that knowledge to the real world. Study after study—including Spanish, Greek and German speakers learning English, Greek speakers learning Hindi, and English speakers learning Mandarin—all confirm that this type of training produces significant changes in the brain's ability to process foreign sounds. And as scientists learn more, they are discovering ways to produce better results. In a 2011 study at Carnegie Mellon, researchers found that people who trained through video games—where they are not explicitly aware of what they are learning—improved more in much less time than when they tried explicit training. Some people might even hone their speech perception skills by training other cognitive brain functions first. In a pilot study not yet published, researchers led by psychologist Erin M. Ingvalson of Northwestern University found that giving elderly adults exercises to boost working memory and attention span helped them better understand speech sounds in noisy environments. Ingvalson believes that with more research, the same technique may also help foreign-language learners. As science reveals how the adult brain adapts to foreign sounds, you can start to re-create the successful research results at home. Many language textbooks begin with a list of hard-to-hear words—the rocks and locks you can expect to encounter along the way to fluency. With a handful of recordings of those words (freely accessible through pronunciation Web sites) and with testing software such as Anki (ankisrs.net), you can build powerful ear-training tools for yourself. These are tools that, after just a few hours of use, will make foreign words easier to hear and easier to remember, and they may give you the edge you need to finally learn the languages you've always wanted to learn.

FURTHER READING

Teaching the /r/–/l/ Discrimination to Japanese Adults: Behavioral and Neural Aspects. James L. McClelland, Julie A. Fiez and Bruce D. McCandliss in Physiology & Behavior, Vol. 77, Nos. 4–5, pages 657–662; December 2002.

Bilingual Speech Perception and Learning: A Review of Recent Trends. Erin M. Ingvalson, Marc Ettlinger and Patrick C. M. Wong in International Journal of Bilingualism, Vol. 18, No. 1, pages 35–47; February 2014.

Detailed instructions on how to create an ear-training regimen from free online resources are at /chapter3
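The Anki-based trainer the article suggests can also be assembled in code. Below is a minimal sketch using the third-party genanki package to pack minimal-pair audio cards into an .apkg deck; the deck/model IDs, the word list, and the file names are illustrative assumptions, and you supply your own recordings.

```python
# Minimal sketch: build an Anki ear-training deck with the genanki package
# (pip install genanki). Each card plays a clip on the front and reveals
# the word on the back; grading yourself in Anki supplies the feedback.
import genanki

model = genanki.Model(
    1607392319,  # arbitrary fixed ID so re-imports update the same model
    "Minimal Pair",
    fields=[{"name": "Audio"}, {"name": "Word"}],
    templates=[{
        "name": "Which word?",
        "qfmt": "{{Audio}}<br>Which word did you hear?",
        "afmt": "{{FrontSide}}<hr id='answer'>{{Word}}",
    }],
)

deck = genanki.Deck(2059400110, "Ear Training::rock vs. lock")
recordings = [("rock.mp3", "rock"), ("lock.mp3", "lock")]  # your own files
for filename, word in recordings:
    deck.add_note(genanki.Note(model=model, fields=[f"[sound:{filename}]", word]))

package = genanki.Package(deck)
package.media_files = [f for f, _ in recordings]  # bundle audio into the .apkg
package.write_to_file("ear_training.apkg")
```

Import the resulting ear_training.apkg into Anki; with several recordings per word from different speakers, this approximates the many-voices training the article recommends.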
Learning a Second Language: Is it All in the Head?

EVANSTON, Ill. --- Think you haven't got the aptitude to learn a foreign language? New research led by Northwestern University neuroscientists suggests that the problem, quite literally, could be in your head. "Our study links brain anatomy to the ability to learn a second language in adulthood," said neuroscientist Patrick Wong, assistant professor of communication sciences and disorders at Northwestern and lead author of a study appearing online July 25 in Cerebral Cortex. Based on the size of Heschl's Gyrus (HG), a brain structure that typically accounts for no more than 0.2 percent of entire brain volume, the researchers found they could predict -- even before exposing study participants to an invented language -- which participants would be more successful in learning 18 words in the "pseudo" language. Wong and his colleagues measured the size of HG, a finger-shaped structure in both the right and left side of the brain, using a method developed by co-authors Virginia Penhune and Robert Zatorre (Montreal Neurological Institute). Zatorre and Penhune are well known for research on human speech and music processing and the brain. "We found that the size of left HG, but not right HG, made the difference," said Northwestern's Catherine Warrier, a primary author of the article titled "Volume of Left Heschl's Gyrus and Linguistic Pitch." Anil K. Roy (Northwestern), Abdulmalek Sadehh (West Virginia University) and Todd Parish (Northwestern) also are co-authors. The study is the first to consider the predictive value of a specific brain structure on linguistic learning even before training has begun. Specifically, the researchers measured the size of study participants' right and left Heschl's Gyrus on MRI brain scans, including calculations of the volume of gray and white matter. Studies in the past have looked at the connection between brain structure and a participant's ability to identify individual speech sounds in isolation rather than learning speech sounds in a linguistic context. Others have looked at the connection between existing language proficiency and brain structure. "While our study demonstrates a link between biology and linguistics, we do not argue that biology is destiny when it comes to learning a second language," Wong emphasized. Adults with smaller volumes of left HG gray matter need not despair that they can never learn another language. "We are already testing different learning strategies for participants whom we predict will be less successful to see if altering the training paradigm results in more successful learning," Wong added. According to Warrier, Northwestern research professor of communication sciences and disorders, the researchers were surprised to find the HG important in second language learning. "The HG, which contains the primary region of the auditory cortex, is typically associated with handling the basic building blocks of sound -- whether the pitch of a sound is going up or down, where sounds come from and how loud a sound is -- and not associated with speech per se," she said. The 17 research participants aged 18 to 26 who had their brain scans taken prior to participating in the pseudo second-language training were previously participants in two related studies published by Wong and his research team.
The three studies have identified behavioral, neurophysiologic and, with the current study, neuroanatomic factors which, when combined, can better predict second-language learning success than can each single factor alone. In a behavioral study, Wong's group found that musical training started at an early age contributed to more successful spoken foreign-language learning. The study participants with musical experience also were found to be better at identifying pitch patterns before training. In a neurophysiologic study -- again with the same participants -- Wong's team used functional magnetic resonance imaging to observe brain areas that were activated when participants listened to different pitch tones. They found that the more successful second-language learners were those who showed activation in the auditory cortex (where HG resides). The participants all were native American English speakers with no knowledge of tone languages. In tone languages (spoken by half the world's population), the meaning of a word can change when delivered in a different pitch tone. In Mandarin, for example, the word "mi" in a level tone means "to squint," in a rising tone means "to bewilder" and in a falling and then rising tone means "rice." For the study reported in Cerebral Cortex, Wong's 17 participants entered a sound booth after having their brains scanned. There they were trained to learn six one-syllable sounds (pesh, dree, ner, vece, nuck and fute). The sounds were originally produced by a speaker of American English and then re-synthesized at three different pitch tones, resulting in 18 different "pseudo" words. The participants were repeatedly shown the 18 "pseudo" words and a black and white picture representing each word's meaning. Pesh, for example, at one pitch meant "glass," at another pitch meant "pencil" and at a third meant "table." Dree, depending upon pitch, meant "arm," "cow" or "telephone." As a group -- and sometimes in fewer than two or three sessions -- the nine participants predicted on the basis of left HG size to be "more successful learners" achieved an average of 97 percent accuracy in identifying the pseudo words. The "less successful" participants averaged 63 percent accuracy and sometimes required as many as 18 training sessions to correctly identify the words. "What's important is that we are looking at the brain in a new way that may allow us to understand brain functions more comprehensively and that could help us more effectively teach foreign languages and possibly other skills," said Wong. Wong's research is supported by grants from the National Institutes of Health. See more at: http://www.northwestern.edu/newscenter/stories/2007/07/neuroscience.html
This article says that adult second-language learning correlates with the size of a particular brain structure, and that musical training during one's formative years helps spoken-language learning a great deal.
  Learning a foreign language can increase the size of your brain. This is what Swedish scientists discovered when they used brain scans to monitor what happens when someone learns a second language. The study is part of a growing body of research using brain imaging technologies to better understand the cognitive benefits of language learning. Tools like magnetic resonance imaging (MRI) and electrophysiology, among others, can now tell us not only whether we need knee surgery or have irregularities with our heartbeat, but reveal what is happening in our brains when we hear, understand and produce second languages.  The Swedish MRI study showed that learning a foreign language has a visible effect on the brain. Young adult military recruits with a flair for languages learned Arabic, Russian or Dari intensively, while a control group of medical and cognitive science students also studied hard, but not at languages. MRI scans showed specific parts of the brains of the language students developed in size whereas the brain structures of the control group remained unchanged. Equally interesting was that learners whose brains grew in the hippocampus and areas of the cerebral cortex related to language learning had better language skills than other learners for whom the motor region of the cerebral cortex developed more.  In other words, the areas of the brain that grew were linked to how easy the learners found languages, and brain development varied according to performance. As the researchers noted, while it is not completely clear what changes after three months of intensive language study mean for the long term, brain growth sounds promising.  Looking at functional MRI brain scans can also tell us what parts of the brain are active during a specific learning task. For example, we can see why adult native speakers of a language like Japanese cannot easily hear the difference between the English “r” and “l” sounds (making it difficult for them to distinguish “river” and “liver” for example). Unlike English, Japanese does not distinguish between “r” and “l” as distinct sounds. Instead, a single sound unit (known as a phoneme) represents both sounds.  When presented with English words containing either of these sounds, brain imaging studies show that only a single region of a Japanese speaker’s brain is activated, whereas in English speakers, two different areas of activation show up, one for each unique sound.  For Japanese speakers, learning to hear and produce the differences between the two phonemes in English requires a rewiring of certain elements of the brain’s circuitry. What can be done? How can we learn these distinctions?  Early language studies based on brain research have shown that Japanese speakers can learn to hear and produce the difference in “r” and “l” by using a software program that greatly exaggerates the aspects of each sound that make it different from the other. When the sounds were modified and extended by the software, participants were more easily able to hear the difference between the sounds. In one study, after only three 20-minute sessions (just a single hour’s worth), the volunteers learned to successfully distinguish the sounds, even when the sounds were presented as part of normal speech.  This sort of research might eventually lead to advances in the use of technology for second-language learning. 
For example, using ultrasound machines like the ones used to show expectant parents the features and movements of their babies in the womb, researchers in articulatory phonetics have been able to explain to language learners how to make sounds by showing them visual images of how their tongue, lips, and jaw should move with their airstream mechanisms and the rise and fall of the soft palate to make these sounds.  Ian Wilson, a researcher working in Japan, has produced some early reports of studies of these technologies that are encouraging. Of course, researchers aren’t suggesting that ultrasound equipment be included as part of regular language learning classrooms, but savvy software engineers are beginning to come up with ways to capitalise on this new knowledge by incorporating imaging into cutting edge language learning apps.  Kara Morgan-Short, a professor at the University of Illinois at Chicago, uses electrophysiology to examine the inner workings of the brain. She and her colleagues taught second-language learners to speak an artificial language – a miniature language constructed by linguists to test claims about language learnability in a controlled way.  In their experiment, one group of volunteers learned through explanations of the rules of the language, while a second group learned by being immersed in the language, similar to how we all learn our native languages. While all of their participants learned, it was the immersed learners whose brain processes were most like those of native speakers. Interestingly, up to six months later, when they could not have received any more exposure to the language at home because the language was artificial, these learners still performed well on tests, and their brain processes had become even more native-like.  In a follow-up study, Morgan-Short and her colleagues showed that the learners who demonstrated particular talents at picking up sequences and patterns learned grammar particularly well through immersion. Morgan-Short said: “This brain-based research tells us not only that some adults can learn through immersion, like children, but might enable us to match individual adult learners with the optimal learning contexts for them.”  Brain imaging research may eventually help us tailor language learning methods to our cognitive abilities, telling us whether we learn best from formal instruction that highlights rules, immersing ourselves in the sounds of a language, or perhaps one followed by the other.  However we learn, this recent brain-based research provides good news. We know that people who speak more than one language fluently have better memories and are more cognitively creative and mentally flexible than monolinguals. Canadian studies suggest that Alzheimer’s disease and the onset of dementia are diagnosed later for bilinguals than for monolinguals, meaning that knowing a second language can help us to stay cognitively healthy well into our later years.  Even more encouraging is that bilingual benefits still hold for those of us who do not learn our second languages as children. Edinburgh University researchers point out that “millions of people across the world acquire their second language later in life: in school, university, or work, or through migration or marriage.” Their results, with 853 participants, clearly show that knowing another language is advantageous, regardless of when you learn it.  Alison Mackey is professor of linguistics at Georgetown University and Lancaster University.
This article: people who have learned a foreign language end up with brain structure different from monolinguals', and with better cognitive function. It also describes how, when Japanese speakers hear R and L, one and the same brain region is activated, while native speakers activate two distinct regions for the two sounds; with software-assisted training, Japanese listeners can begin to distinguish R from L within an hour. In an experiment with an artificial language taught to adults either by grammar-translation or by immersion, brain scans of the immersion learners looked more like a native speaker's working pattern. And people who have learned a foreign language (no matter when they started) have a lower probability of dementia than monolinguals, with later onset.
A first-of-its-kind series of brain studies shows how an adult learning a foreign language can come to use the same brain mechanisms as a native speaker. The research also demonstrates that the kind of exposure you have to the language can determine whether you achieve native-language brain processing, and that learning under immersion conditions may be more effective in reaching this goal than typical classroom training. The research also suggests that the brain consolidates knowledge of the foreign language as time goes on, much like it does when a person learns to ride a bike or play a musical instrument. The latest in this series of studies was published online in today's PLoS ONE by researchers from Georgetown University Medical Center (GUMC) and the University of Illinois at Chicago. "In the last few years, research has begun to suggest that adults learning a foreign language can come to rely on the same brain mechanisms as native speakers of a language, and that this might be true even for those parts of a foreign language that are particularly difficult to learn, such as its grammar," explains Michael Ullman, Ph.D., a professor of neuroscience at GUMC and senior investigator of the studies. "We confirmed this in our studies." However, even if it's true that foreign language learners might be able to achieve native-like processing of grammar, Ullman says it has not at all been clear just how they can get there; that is, what exactly allows a learner to attain native-like processing. Ullman and lead author Kara Morgan-Short, Ph.D., from the University of Illinois at Chicago, first tested whether the conditions under which a person learns a foreign language matter. Specifically, is the type of foreign language exposure typically found in classrooms, with a lot of explanations about the grammar, more or less beneficial than the type of exposure in an immersion situation, in which there are no such explanations, but simply many language examples? "Surprisingly, previous studies have found that the type of exposure typically found in classrooms leads to better learning than that typically found in immersion. However, no studies have looked at the actual brain mechanisms after different types of exposure," Morgan-Short says. Also, because a foreign language is so slow to learn, previous studies have not examined the outcomes of different types of exposure beyond the early stages of learning, since it would take far too long to wait until participants reached high proficiency, she says. To get around this problem, the scientists came up with a clever solution. Rather than teach people a full foreign language, they taught them a very small one, with only 13 words, which referred to the pieces and moves of a computer game. The language itself was made-up, and its grammar was constructed so that it was like that of other natural languages, but differed from the participants' native language English in important respects, such as its grammatical structure. The scientists found that after a few days, adults had indeed reached high proficiency in the language, whether they had undergone classroom- or immersion-like training. However, measures of brain processing showed that different types of training led to different brain mechanisms. "Only the immersion training led to full native-like brain processing of grammar," Ullman says. "So if you learn a language you can come to use native language brain processes, but you may need immersion rather than classroom exposure." (These results were published online Aug. 23, 2011 in the Journal of Cognitive Neuroscience.)

For the study published in PLoS ONE, the researchers asked another very interesting question: What happens after you've reached high proficiency in a foreign language, if you're not regularly exposed to it? Do you lose the use of any native-language brain mechanisms that you've attained? Many learners do not always have ongoing exposure, which makes this a critical question, Ullman says. So, without having warned their research participants beforehand, the researchers called them an average of five months later, and asked them to come back for another round of brain scanning. Because the language was made-up, the scientists were sure that the participants hadn't had any exposure to it during this entire time. The researchers weren't sure what they would find, since this was the first study examining the brain after such a period of no exposure. However, previous studies testing only proficiency changes found, not surprisingly, that foreign language learners generally did worse after such periods, so the scientists assumed that the brain would also become less native-like. "To our surprise, the participants actually became more native-like in their brain processing of grammar," Ullman says. "And this was true for both the classroom and immersion training groups, though it was still the case that only the immersion group showed full native-like processing." Ullman believes that, over time, memory of the language was "consolidated" in the brain, probably by the same mechanisms that also underlie native language. He says this process is probably similar to the consolidation of many other skills that a person might learn, such as learning to ride a bike or play a musical instrument. Interestingly, the participants showed neither improvements nor loss of proficiency during the same five month period, even as their brains became more native-like, Ullman says. The scientists are uncertain why this might be, though it is possible that proficiency changes might in fact have been observed with more precise measures, or that improvements had occurred some time after training but then were gradually lost in the absence of practice during the five months. Ullman says that even without any observed changes in proficiency, the brain changes are important. "Native language brain mechanisms are clearly well suited to language, so attaining their use is a critical achievement for foreign language learners. We suspect that this should lead to improved retention of the language as well as higher proficiency over time."

IN IMMERSION FOREIGN LANGUAGE LEARNING, ADULTS ATTAIN, RETAIN NATIVE SPEAKER BRAIN PATTERN. Support for the PLoS ONE and JoCN studies was provided by the National Institutes of Health, the National Science Foundation, and a Georgetown University Dissertation Fellowship. Other authors of the PLoS ONE study include Ingrid Finger, Ph.D. and Sarah Grey, Ph.D. candidate. Other authors of the JoCN study include Karsten Steinhauer, Ph.D. and Cristina Sanz, Ph.D. The authors report having no personal financial interests related to the study.
The article above: in short-term comparisons, classroom grammar-based instruction beats immersion, but language learning is a long-term process. So the researchers designed training on a language with a tiny vocabulary, taught it by rule-based instruction versus immersion, and the latter showed the better results.
How the Brain Learns a Second Language

Whether or not we agree philosophically with the concept of a "national language," English is clearly the dominant language in the United States. As such, knowledge of English is an important component to success in this country. The percentage of people in the U.S. from non-English speaking nations is growing, which has fostered the search among grade schools, universities, and adult education programs for the best methods to teach English to non-native speakers. Research with songbirds and sophisticated brain imaging technologies provides some intriguing insights into how to best accomplish the goals of teaching (and learning) a second language.

Whistling Finches and Listening Children

Studies of song development in certain species of songbirds suggest that auditory feedback may be a crucial step in learning language. Allison Doupe, a professor at the University of California, San Francisco, and postdoctoral fellow Michael Brainard study the way zebra finches develop their characteristic songs. Young male zebra finches learn a single tune early in life from their fathers. Doupe and Brainard have found that this learning process depends on the young finch being able to hear not only its father's songs, but also its own attempts to vocalize the tune. This requirement of auditory feedback in songbirds corroborates what has been seen in humans. Researchers came to understand this when, in the early 1970's, they learned of a child named Genie who had been confined and raised without human contact or stimulation from the age of 20 months to 13 years. As a result, she displayed very abnormal vocalizations, particularly with syntax. Genie was almost completely unable to master things like verb tense, word order, prepositions or pronouns. It is also known that older children who lose their hearing gradually lose their ability to form words properly. As Doupe and Brainard write in the October 2000 issue of the journal Nature Neuroscience: "These findings provide evidence that, in humans as in songbirds, the sounds produced by the individuals themselves are essential for normal vocal development." If auditory feedback is so important in the initial development of language, it stands to reason that it may also be required to learn a second language. Indeed, studies have shown that successful second language learners tend to enhance their communication skills by listening to the radio in the second language or by talking with native speakers. Thus, it appears that the combination of auditory input from the second language and the student's own work to vocalize that language is key to learning.

Old Dogs Hear New Tricks

Anyone who has tried knows that as we enter adulthood, it is increasingly difficult to learn a second language. A study conducted by researchers at the Center for the Neural Basis of Cognition in Pittsburgh has shown that targeted auditory input can successfully help adults learn a second language.
Native Japanese speakers normally cannot distinguish between the English "r" and "l" sounds. Sound units of words are called "phonemes," and studies suggest that as the language centers of our brain mature, certain phonemes are "wired" into those brain centers. Phonemes that are not essential to the native language are not incorporated, implying that adult brains are simply less receptive to foreign phonemes. Since the Japanese language does not distinguish between r and l, a single phoneme represents both sounds. When presented with English words containing either of these sounds, brain imaging studies show that only a single region of a Japanese speaker's brain is activated, whereas native English speakers show different areas of activation for each sound. Learning to distinguish the phonemes might then actually require a "rewiring" of certain elements of the brain's circuitry. Jay McClelland, a co-director of the Pittsburgh study, has shown that some kind of plasticity remains even in adult brains. At the annual meeting of the Cognitive Neuroscience Society in 1999, he reported that adult Japanese speakers could learn to hear the difference in "r" and "l" if they were trained with the help of a computer that exaggerated each phoneme's particular frequency or formant. When the phonemes were modified and extended by the computer, the study volunteers were able to hear the difference between the sounds. With an hour's worth of training, the volunteers could eventually hear the difference between the sounds, even when the phonemes were presented at the speed of normal speech. The results from the Pittsburgh study suggest that although it may be more difficult to learn a second language as an adult, the same tools we used to initially learn our native languages also help us in acquiring a second language. This research shows that adult English language learners may be more successful if given auditory sessions in which phonemes that seem particularly difficult for non-English speakers are extended and exaggerated until the learners can recognize them at normal speed.

How the Brain Makes Way for a Second Language

Studies involving a sophisticated brain imaging technology called functional magnetic resonance imaging, fMRI, have also revealed some intriguing patterns in the way our brains process first and second languages. Joy Hirsch and her colleagues at Cornell University used fMRI to determine how multiple languages are represented in the human brain. They found that native and second languages are spatially separated in Broca's area, a region in the frontal lobe of the brain that is responsible for the motor parts of language: movement of the mouth, tongue, and palate. In contrast, the two languages show very little separation in the activation of Wernicke's area, an area in the posterior part of the temporal lobe, which is responsible for comprehension of language. The fMRI studies suggest that the difficulty adult learners of a second language may have is not with understanding the words of the second language, but with the motor skills of forming the words with the mouth and tongue. This may explain why learners of a second language can oftentimes comprehend a question asked in the new language, but are not always able to form a quick response. Thus, for adult English language learners, techniques that emphasize speaking may be more successful than methods that focus more on reading and listening.
For example, rather than lecturing to a class about vocabulary and grammar, an instructor perhaps should encourage her adult students to have conversations in English, or to act out short skits incorporating the day's lesson, which would more closely link the students' abilities to understand and speak the new language. Speaking would thus equal understanding. The Cornell researchers also studied the brains of people who were bilingual from a very early age. Presumably, this group of people is able to speak the two languages as easily as they can comprehend both languages spoken to them. The researchers found that these subjects showed no spatial separation in either Broca's or Wernicke's areas for the two languages, indicating that, in terms of brain activation at least, the same regions of the brain controlled their ability to process both languages. The idea that second languages learned early in childhood are not processed separately in the brain is supported by these findings.
