Survey on unsupervised monocular depth estimation in dynamic scenes based on deep learning
2023, pp. 1-18
Online publication date: 2023-11-29
DOI: 10.11834/jrs.20233060
CHENG Binbin, YU Ying, ZHANG Lei, WANG Ziquan, JIANG Zhipeng. XXXX. Survey on unsupervised monocular depth estimation in dynamic scenes based on deep learning. National Remote Sensing Bulletin, XX(XX): 1-18
No scene in the real world is completely static. Monocular depth estimation in dynamic scenes refers to recovering the depth of both the dynamic foreground and the static background from a single image. Compared with traditional stereo estimation methods it is more flexible and less costly, giving it strong research significance and broad development prospects, and it plays a key role in downstream tasks such as 3D reconstruction and autonomous driving. With the rapid development of deep learning, unsupervised learning, which requires no ground-truth labels, has attracted wide research interest. Scholars at home and abroad have proposed a series of unsupervised monocular depth estimation algorithms for handling dynamic objects in scenes, laying a foundation for researchers in related fields, yet no comprehensive analysis of these methods has been published. To address this gap, this paper systematically reviews and summarizes the progress of deep-learning-based unsupervised monocular depth estimation in dynamic scenes. First, the basic models of unsupervised monocular depth estimation are summarized, how self-supervised constraints are imposed between images is analyzed, and the basic framework of unsupervised estimation from consecutive frames is illustrated. The impact of dynamic objects on depth estimation is explained from four aspects: epipolar lines, triangulation, fundamental matrix estimation, and reprojection error.
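To make the self-supervised constraint concrete, the two equations below sketch the standard continuous-frame formulation (the notation is generic, not drawn from any one surveyed paper): the predicted depth and relative camera pose warp each pixel of the target frame into the source frame, and the photometric difference between the real and reconstructed images supervises both networks.

```latex
% Rigid reprojection: pixel p of the target frame I_t maps to p_s in the
% source frame I_s via predicted depth D_t(p), predicted relative camera
% pose T_{t->s}, and camera intrinsics K.
p_{s} \sim K \, T_{t \rightarrow s} \, D_{t}(p) \, K^{-1} \, p

% Photometric reprojection loss between I_t and the source frame warped
% into the target view, \hat{I}_{s \rightarrow t}.
\mathcal{L}_{\mathrm{photo}} = \sum_{p} \bigl| I_{t}(p) - \hat{I}_{s \rightarrow t}(p) \bigr|
```

A moving object violates the rigid-scene assumption behind the first equation: its true correspondence leaves the epipolar line induced by camera motion alone, which biases triangulation and fundamental matrix estimation and keeps the reprojection error high even when depth and pose are predicted correctly.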
Second, the datasets and evaluation metrics commonly used in monocular depth estimation research are introduced. The KITTI and Cityscapes datasets provide continuous outdoor image sequences, and the NYU Depth V2 dataset provides indoor dynamic-scene data; these are generally used for model training. The Make3D dataset provides depth ground truth but discontinuous images, so it is generally used to test the generalization ability of a model. Algorithms are quantitatively evaluated with the root mean square error (RMSE), the logarithmic root mean square error (RMSE log), the absolute relative error (Abs Rel), the squared relative error (Sq Rel), and the threshold accuracies (Acc), and the performance of classic monocular depth estimation models in dynamic scenes is compared and analyzed.
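For reference, these metrics follow the standard definitions used throughout the depth estimation literature, where $d_{i}$ is the predicted depth, $d_{i}^{*}$ the ground-truth depth, and $N$ the number of evaluated pixels:

```latex
\mathrm{Abs\ Rel} = \frac{1}{N} \sum_{i=1}^{N} \frac{\lvert d_{i} - d_{i}^{*} \rvert}{d_{i}^{*}}
\qquad
\mathrm{Sq\ Rel} = \frac{1}{N} \sum_{i=1}^{N} \frac{(d_{i} - d_{i}^{*})^{2}}{d_{i}^{*}}

\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (d_{i} - d_{i}^{*})^{2}}
\qquad
\mathrm{RMSE\ log} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (\log d_{i} - \log d_{i}^{*})^{2}}

% Threshold accuracy: the fraction of pixels whose ratio to ground truth
% falls below 1.25^t for t = 1, 2, 3.
\mathrm{Acc}: \ \max\!\left( \frac{d_{i}}{d_{i}^{*}}, \frac{d_{i}^{*}}{d_{i}} \right) = \delta < 1.25^{t}, \quad t \in \{1, 2, 3\}
```

Lower values are better for the four error metrics, and higher values are better for the threshold accuracies.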
Then, according to how dynamic objects are handled, the literature is summarized and quantitatively analyzed along two research directions: robust depth estimation in dynamic scenes and dynamic object tracking and depth estimation. In the first, dynamic objects are extracted and treated as outliers during training so that their influence is minimized and the model learns from static background information alone. In the second, the dynamic foreground and the static background are distinguished accurately and processed separately; the algorithms that detect and segment dynamic objects from optical flow, semantic, and other cues while estimating their motion are explained. The advantages and disadvantages of each class of algorithms are summarized against the commonly used evaluation criteria. Finally, future directions for monocular depth estimation in dynamic scenes are discussed in terms of network model optimization, online learning and generalization, real-time operation on embedded devices, and domain adaptation of unsupervised learning.
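As a concrete illustration of the first direction, the following minimal PyTorch sketch (our own example, not code from any surveyed method; the static_mask input is assumed to be produced by an upstream module such as an optical-flow or semantic-segmentation detector) restricts the photometric loss to static pixels:

```python
import torch

def masked_photometric_loss(target: torch.Tensor,
                            warped: torch.Tensor,
                            static_mask: torch.Tensor) -> torch.Tensor:
    """L1 photometric loss averaged over static pixels only.

    target, warped : (B, 3, H, W) reference frame and the view synthesized
                     from an adjacent frame via depth-and-pose warping.
    static_mask    : (B, 1, H, W) binary mask, 1 = static background,
                     0 = dynamic object (treated as an outlier).
    """
    static_mask = static_mask.float()  # accept bool or 0/1 masks
    # Per-pixel photometric error between the reference image and the
    # reconstruction warped from the source view.
    err = (target - warped).abs().mean(dim=1, keepdim=True)
    # Zero out dynamic pixels so that moving objects, which violate the
    # static-scene assumption of the reprojection model, contribute no
    # gradient; average only over the remaining static pixels.
    denom = static_mask.sum().clamp(min=1.0)
    return (err * static_mask).sum() / denom
```

With static_mask set to all ones this reduces to the ordinary photometric loss; masking out detected vehicles or pedestrians implements exactly the outlier treatment described above.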
Keywords: dynamic scenes; monocular depth estimation; unsupervised learning; deep learning; 3D reconstruction