Tsinghua NLP Group's Annual Masterpiece: A Reading List of the Most Important Papers from 30 Years of Machine Translation (Part 1) (3)


Long Zhou, Wenpeng Hu, Jiajun Zhang, and Chengqing Zong. 2017. Neural System Combination for Machine Translation. In Proceedings of ACL 2017.
Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2017. Neural Lattice-to-Sequence Models for Uncertain Inputs. In Proceedings of EMNLP 2017.
Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc Le. 2017. Massive Exploration of Neural Machine Translation Architectures. In Proceedings of EMNLP 2017.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In Proceedings of NIPS 2017. (See the sketch after this list.)
Lukasz Kaiser, Aidan N. Gomez, and Francois Chollet. 2018. Depthwise Separable Convolutions for Neural Machine Translation. In Proceedings of ICLR 2018.
Yanyao Shen, Xu Tan, Di He, Tao Qin, and Tie-Yan Liu. 2018. Dense Information Flow for Neural Machine Translation. In Proceedings of NAACL 2018.
Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation. In Proceedings of ACL 2018.
Weiyue Wang, Derui Zhu, Tamer Alkhouli, Zixuan Gan, and Hermann Ney. 2018. Neural Hidden Markov Model for Machine Translation. In Proceedings of ACL 2018.
Qiang Wang, Fuxue Li, Tong Xiao, Yanyang Li, Yinqiao Li, and Jingbo Zhu. 2018. Multi-layer Representation Fusion for Neural Machine Translation. In Proceedings of COLING 2018.
Yachao Li, Junhui Li, and Min Zhang. 2018. Adaptive Weighting for Neural Machine Translation. In Proceedings of COLING 2018.
Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi, and Tong Zhang. 2018. Exploiting Deep Representations for Neural Machine Translation. In Proceedings of EMNLP 2018.
Biao Zhang, Deyi Xiong, Jinsong Su, Qian Lin, and Huiji Zhang. 2018. Simplifying Neural Machine Translation with Addition-Subtraction Twin-Gated Recurrent Networks. In Proceedings of EMNLP 2018.
Gongbo Tang, Mathias Müller, Annette Rios, and Rico Sennrich. 2018. Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures. In Proceedings of EMNLP 2018.
Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The Importance of Being Recurrent for Modeling Hierarchical Structure. In Proceedings of EMNLP 2018.
Parnia Bahar, Christopher Brix, and Hermann Ney. 2018. Towards Two-Dimensional Sequence to Sequence Model in Neural Machine Translation. In Proceedings of EMNLP 2018.
Tianyu He, Xu Tan, Yingce Xia, Di He, Tao Qin, Zhibo Chen, and Tie-Yan Liu. 2018. Layer-Wise Coordination between Encoder and Decoder for Neural Machine Translation. In Proceedings of NeurIPS 2018.
Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving Human Parity on Automatic Chinese to English News Translation. Technical report. Microsoft AI & Research.
Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. 2019. Universal Transformers. In Proceedings of ICLR 2019.
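Most of the later entries in this list extend or benchmark against the Transformer of Vaswani et al. (2017), which stacks identical layers of self-attention and position-wise feed-forward sub-layers, each wrapped in a residual connection and layer normalization. Below is a minimal NumPy sketch of what a single encoder layer computes. The single attention head, toy dimensions, random weights, and omission of dropout, masking, and learned normalization parameters are simplifying assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-6):
    # Simplified layer normalization (no learned gain/bias).
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + eps)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

def encoder_layer(x, p):
    # Sub-layer 1: self-attention, then residual connection + layer norm.
    Q, K, V = x @ p["Wq"], x @ p["Wk"], x @ p["Wv"]
    x = layer_norm(x + scaled_dot_product_attention(Q, K, V) @ p["Wo"])
    # Sub-layer 2: position-wise feed-forward network, same residual scheme.
    ffn = np.maximum(0, x @ p["W1"]) @ p["W2"]  # ReLU between two projections
    return layer_norm(x + ffn)

# Toy run: a "sentence" of 5 positions with model dimension 16.
rng = np.random.default_rng(0)
d_model, d_ff, n = 16, 64, 5
params = {k: rng.normal(0, 0.1, shape) for k, shape in {
    "Wq": (d_model, d_model), "Wk": (d_model, d_model),
    "Wv": (d_model, d_model), "Wo": (d_model, d_model),
    "W1": (d_model, d_ff), "W2": (d_ff, d_model)}.items()}
x = rng.normal(0, 1, (n, d_model))  # stand-in for embeddings + positions
print(encoder_layer(x, params).shape)  # (5, 16)
```

The post-norm residual pattern LayerNorm(x + Sublayer(x)) follows the original paper; several of the follow-up papers above study exactly which of these ingredients (recurrence, convolution, self-attention) the quality gains come from.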
Attention Mechanism
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of ICLR 2015. (See the sketch after this list.)
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of EMNLP 2015.
Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised Attentions for Neural Machine Translation. In Proceedings of EMNLP 2016.
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A Structured Self-attentive Sentence Embedding. In Proceedings of ICLR 2017.
Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding. In Proceedings of AAAI 2018.
Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. 2018. Bi-directional Block Self-attention for Fast and Memory-efficient Sequence Modeling. In Proceedings of ICLR 2018.
Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, and Chengqi Zhang. 2018. Reinforced Self-Attention Network: A Hybrid of Hard and Soft Attention for Sequence Modeling. In Proceedings of IJCAI 2018.
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-Attention with Relative Position Representations. In Proceedings of NAACL 2018.
Lesly Miculicich Werlen, Nikolaos Pappas, Dhananjay Ram, and Andrei Popescu-Belis. 2018. Self-Attentive Residual Decoder for Neural Machine Translation. In Proceedings of NAACL 2018.
Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, and Max Meng. 2018. Target Foresight Based Attention for Neural Machine Translation. In Proceedings of NAACL 2018.
Biao Zhang, Deyi Xiong, and Jinsong Su. 2018. Accelerating Neural Transformer via an Average Attention Network. In Proceedings of ACL 2018.
Tobias Domhan. 2018. How Much Attention Do You Need? A Granular Analysis of Neural Machine Translation Architectures. In Proceedings of ACL 2018.
Shaohui Kuang, Junhui Li, António Branco, Weihua Luo, and Deyi Xiong. 2018. Attention Focusing for Neural Machine Translation by Bridging Source and Target Embeddings. In Proceedings of ACL 2018.
Chaitanya Malaviya, Pedro Ferreira, and André F. T. Martins. 2018. Sparse and Constrained Attention for Neural Machine Translation. In Proceedings of ACL 2018.
Jian Li, Zhaopeng Tu, Baosong Yang, Michael R. Lyu, and Tong Zhang. 2018. Multi-Head Attention with Disagreement Regularization. In Proceedings of EMNLP 2018.
Wei Wu, Houfeng Wang, Tianyu Liu, and Shuming Ma. 2018. Phrase-level Self-Attention Networks for Universal Sentence Encoding. In Proceedings of EMNLP 2018.
Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018. Modeling Localness for Self-Attention Networks. In Proceedings of EMNLP 2018.
Junyang Lin, Xu Sun, Xuancheng Ren, Muyu Li, and Qi Su. 2018. Learning When to Concentrate or Divert Attention: Self-Adaptive Attention Temperature for Neural Machine Translation. In Proceedings of EMNLP 2018.
Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. 2018. Training Deeper Neural Machine Translation Models with Transparent Attention. In Proceedings of EMNLP 2018.
Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2018. Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction. In Proceedings of CoNLL 2018.
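The list opens with the two papers that defined attention for NMT: Bahdanau et al. (2015) score each source position with a small additive network, while Luong et al. (2015) propose simpler multiplicative scores. The sketch below shows one decoder step of each. The toy dimensions, random weights in place of learned parameters, and the simplification of feeding the previous decoder state to both variants (Luong et al. actually condition on the current decoder state) are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    x = x - x.max()  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum()

def additive_attention(s_prev, H, Wa, Ua, va):
    # Bahdanau et al. 2015: e_j = v_a^T tanh(W_a s_{t-1} + U_a h_j)
    e = np.tanh(s_prev @ Wa + H @ Ua) @ va  # one score per source position
    alpha = softmax(e)                      # normalized attention weights
    return alpha @ H, alpha                 # context vector, weights

def multiplicative_attention(s, H, Wa):
    # Luong et al. 2015 ("general" score): e_j = s^T W_a h_j
    e = H @ (Wa @ s)
    alpha = softmax(e)
    return alpha @ H, alpha

rng = np.random.default_rng(0)
src_len, d = 6, 8
H = rng.normal(size=(src_len, d))   # encoder hidden states h_1 .. h_6
s_prev = rng.normal(size=d)         # previous decoder state s_{t-1}
Wa, Ua = rng.normal(size=(d, d)), rng.normal(size=(d, d))
va = rng.normal(size=d)

context, alpha = additive_attention(s_prev, H, Wa, Ua, va)
print(alpha.round(3), alpha.sum())  # weights over source positions; sum to 1.0
```

In both cases the context vector is a weighted sum of encoder states that is fed into the decoder to predict the next target word; the remaining papers above largely refine how these weights are computed, supervised, sparsified, or localized.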