{"id":970,"date":"2023-01-09T09:55:43","date_gmt":"2023-01-09T09:55:43","guid":{"rendered":"https:\/\/cpii-web-test01.azurewebsites.net\/?page_id=970"},"modified":"2025-10-22T11:10:25","modified_gmt":"2025-10-22T03:10:25","slug":"publications","status":"publish","type":"page","link":"https:\/\/cpii.hk\/zh\/publications\/","title":{"rendered":"\u5b78\u8853\u671f\u520a"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-page\" data-elementor-id=\"970\" class=\"elementor elementor-970\" data-elementor-post-type=\"page\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-334ec06 elementor-section-height-min-height elementor-section-boxed elementor-section-height-default elementor-section-items-middle wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no wpr-equal-height-no\" data-id=\"334ec06\" data-element_type=\"section\" data-e-type=\"section\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;,&quot;background_motion_fx_motion_fx_scrolling&quot;:&quot;yes&quot;,&quot;background_motion_fx_translateY_effect&quot;:&quot;yes&quot;,&quot;background_motion_fx_translateY_direction&quot;:&quot;negative&quot;,&quot;background_motion_fx_translateY_speed&quot;:{&quot;unit&quot;:&quot;px&quot;,&quot;size&quot;:6.5,&quot;sizes&quot;:[]},&quot;background_motion_fx_translateY_affectedRange&quot;:{&quot;unit&quot;:&quot;%&quot;,&quot;size&quot;:&quot;&quot;,&quot;sizes&quot;:{&quot;start&quot;:0,&quot;end&quot;:100}},&quot;background_motion_fx_devices&quot;:[&quot;desktop&quot;,&quot;tablet&quot;,&quot;mobile&quot;]}\">\n\t\t\t\t\t\t\t<div class=\"elementor-background-overlay\"><\/div>\n\t\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-fcb24fc\" data-id=\"fcb24fc\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-9e5dec7 elementor-nav-menu__align-center elementor-hidden-tablet elementor-hidden-mobile elementor-nav-menu--dropdown-tablet elementor-nav-menu__text-align-aside elementor-nav-menu--toggle elementor-nav-menu--burger elementor-widget elementor-widget-nav-menu\" data-id=\"9e5dec7\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;layout&quot;:&quot;horizontal&quot;,&quot;submenu_icon&quot;:{&quot;value&quot;:&quot;&lt;i class=\\&quot;fas fa-caret-down\\&quot; aria-hidden=\\&quot;true\\&quot;&gt;&lt;\\\/i&gt;&quot;,&quot;library&quot;:&quot;fa-solid&quot;},&quot;toggle&quot;:&quot;burger&quot;}\" data-widget_type=\"nav-menu.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<nav aria-label=\"Menu\" class=\"elementor-nav-menu--main elementor-nav-menu__container elementor-nav-menu--layout-horizontal e--pointer-underline e--animation-fade\">\n\t\t\t\t<ul id=\"menu-1-9e5dec7\" class=\"elementor-nav-menu\"><li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-1813\"><a href=\"https:\/\/cpii.hk\" class=\"elementor-item\">Home<\/a><\/li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1308\"><a href=\"https:\/\/cpii.hk\/zh\/about\/\" class=\"elementor-item\">About<\/a><\/li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-has-children menu-item-1832\"><a 
href=\"https:\/\/cpii.hk\/zh\/rd-areas\/\" class=\"elementor-item\">R&#038;D Areas<\/a>\n<ul class=\"sub-menu elementor-nav-menu--dropdown\">\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1833\"><a href=\"https:\/\/cpii.hk\/zh\/rd-areas\/program-1\/\" class=\"elementor-sub-item\">Program 1 \u2013 Visual Intelligence<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1834\"><a href=\"https:\/\/cpii.hk\/zh\/rd-areas\/program-2\/\" class=\"elementor-sub-item\">Program 2 \u2013 Speech &#038; Language Intelligence<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1835\"><a href=\"https:\/\/cpii.hk\/zh\/rd-areas\/program-3\/\" class=\"elementor-sub-item\">Program 3 \u2013 Vision- and Language-based Healthcare AI<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1836\"><a href=\"https:\/\/cpii.hk\/zh\/rd-areas\/program-4\/\" class=\"elementor-sub-item\">Program 4 \u2013 Vision-based Urban Services<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1837\"><a href=\"https:\/\/cpii.hk\/zh\/rd-areas\/program-5\/\" class=\"elementor-sub-item\">Program 5 \u2013 AI-Enabled Design &#038; Automations<\/a><\/li>\n<\/ul>\n<\/li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-has-children menu-item-7750\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/\" class=\"elementor-item\">Inventions<\/a>\n<ul class=\"sub-menu elementor-nav-menu--dropdown\">\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-9949\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/ai-enhanced-weaving-based-freeform-sensing-interfaces\/\" class=\"elementor-sub-item\">AI-Based Personalized Design and Fabrication<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-7756\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/cpii-meeting-assistant-speech-to-summary\/\" class=\"elementor-sub-item\">CPII Meeting Assistant Speech to Summary<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-9107\"><a href=\"https:\/\/cpii.hk\/zh\/cpii-chatdoc-master\/\" class=\"elementor-sub-item\">CPII ChatDoc Master<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-8141\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/dysarthric-speech-reconstruction\/\" class=\"elementor-sub-item\">Dysarthric Speech Reconstruction<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-7753\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/edge-ai-technologies-for-infrastructure-assisted-autonomous-driving\/\" class=\"elementor-sub-item\">Edge AI Technologies For Infrastructure-Assisted Autonomous Driving<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-7754\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/femtosecond-projection-nanoprinter\/\" class=\"elementor-sub-item\">Femtosecond Projection Nanoprinter<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-9950\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/flashlight-ai-search-on-video\/\" class=\"elementor-sub-item\">FlashLight: AI Search on Video<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-8142\"><a 
href=\"https:\/\/cpii.hk\/zh\/inventions\/information-seeking-conversational-system\/\" class=\"elementor-sub-item\">Information-seeking Conversational System<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-9947\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/motion-i2v-consistent-and-controllable-image-to-video-generation-with-explicit-motion-modeling\/\" class=\"elementor-sub-item\">Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit\u00a0Motion\u00a0Modeling<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-7752\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/smart-augmentative-and-alternative-communication\/\" class=\"elementor-sub-item\">Smart Augmentative and Alternative Communication<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-9948\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/the-first-controllable-cantonese-text-to-personalized-speech-generation-technology\/\" class=\"elementor-sub-item\">The First Controllable Cantonese Text-to-Personalized Speech Generation Technology<\/a><\/li>\n<\/ul>\n<\/li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1596\"><a href=\"https:\/\/cpii.hk\/zh\/news\/\" class=\"elementor-item\">News<\/a><\/li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1824\"><a href=\"https:\/\/cpii.hk\/zh\/publications\/\" class=\"elementor-item\">Publications<\/a><\/li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-4562\"><a href=\"https:\/\/cpii.hk\/zh\/join-us-2\/\" class=\"elementor-item\">Join Us<\/a><\/li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1330\"><a href=\"https:\/\/cpii.hk\/zh\/contact-us\/\" class=\"elementor-item\">Contact Us<\/a><\/li>\n<\/ul>\t\t\t<\/nav>\n\t\t\t\t\t<div class=\"elementor-menu-toggle\" role=\"button\" tabindex=\"0\" aria-label=\"Menu Toggle\" aria-expanded=\"false\">\n\t\t\t<i aria-hidden=\"true\" role=\"presentation\" class=\"elementor-menu-toggle__icon--open eicon-menu-bar\"><\/i><i aria-hidden=\"true\" role=\"presentation\" class=\"elementor-menu-toggle__icon--close eicon-close\"><\/i>\t\t<\/div>\n\t\t\t\t\t<nav class=\"elementor-nav-menu--dropdown elementor-nav-menu__container\" aria-hidden=\"true\">\n\t\t\t\t<ul id=\"menu-2-9e5dec7\" class=\"elementor-nav-menu\"><li class=\"menu-item menu-item-type-custom menu-item-object-custom menu-item-1813\"><a href=\"https:\/\/cpii.hk\" class=\"elementor-item\" tabindex=\"-1\">Home<\/a><\/li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1308\"><a href=\"https:\/\/cpii.hk\/zh\/about\/\" class=\"elementor-item\" tabindex=\"-1\">About<\/a><\/li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-has-children menu-item-1832\"><a href=\"https:\/\/cpii.hk\/zh\/rd-areas\/\" class=\"elementor-item\" tabindex=\"-1\">R&#038;D Areas<\/a>\n<ul class=\"sub-menu elementor-nav-menu--dropdown\">\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1833\"><a href=\"https:\/\/cpii.hk\/zh\/rd-areas\/program-1\/\" class=\"elementor-sub-item\" tabindex=\"-1\">Program 1 \u2013 Visual Intelligence<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1834\"><a href=\"https:\/\/cpii.hk\/zh\/rd-areas\/program-2\/\" class=\"elementor-sub-item\" tabindex=\"-1\">Program 2 \u2013 Speech &#038; 
Language Intelligence<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1835\"><a href=\"https:\/\/cpii.hk\/zh\/rd-areas\/program-3\/\" class=\"elementor-sub-item\" tabindex=\"-1\">Program 3 \u2013 Vision- and Language-based Healthcare AI<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1836\"><a href=\"https:\/\/cpii.hk\/zh\/rd-areas\/program-4\/\" class=\"elementor-sub-item\" tabindex=\"-1\">Program 4 \u2013 Vision-based Urban Services<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-1837\"><a href=\"https:\/\/cpii.hk\/zh\/rd-areas\/program-5\/\" class=\"elementor-sub-item\" tabindex=\"-1\">Program 5 \u2013 AI-Enabled Design &#038; Automations<\/a><\/li>\n<\/ul>\n<\/li>\n<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-has-children menu-item-7750\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/\" class=\"elementor-item\" tabindex=\"-1\">Inventions<\/a>\n<ul class=\"sub-menu elementor-nav-menu--dropdown\">\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-9949\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/ai-enhanced-weaving-based-freeform-sensing-interfaces\/\" class=\"elementor-sub-item\" tabindex=\"-1\">AI-Based Personalized Design and Fabrication<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-7756\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/cpii-meeting-assistant-speech-to-summary\/\" class=\"elementor-sub-item\" tabindex=\"-1\">CPII Meeting Assistant Speech to Summary<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-9107\"><a href=\"https:\/\/cpii.hk\/zh\/cpii-chatdoc-master\/\" class=\"elementor-sub-item\" tabindex=\"-1\">CPII ChatDoc Master<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-8141\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/dysarthric-speech-reconstruction\/\" class=\"elementor-sub-item\" tabindex=\"-1\">Dysarthric Speech Reconstruction<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-7753\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/edge-ai-technologies-for-infrastructure-assisted-autonomous-driving\/\" class=\"elementor-sub-item\" tabindex=\"-1\">Edge AI Technologies For Infrastructure-Assisted Autonomous Driving<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-7754\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/femtosecond-projection-nanoprinter\/\" class=\"elementor-sub-item\" tabindex=\"-1\">Femtosecond Projection Nanoprinter<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-9950\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/flashlight-ai-search-on-video\/\" class=\"elementor-sub-item\" tabindex=\"-1\">FlashLight: AI Search on Video<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-8142\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/information-seeking-conversational-system\/\" class=\"elementor-sub-item\" tabindex=\"-1\">Information-seeking Conversational System<\/a><\/li>\n\t<li class=\"menu-item menu-item-type-post_type menu-item-object-page menu-item-9947\"><a href=\"https:\/\/cpii.hk\/zh\/inventions\/motion-i2v-consistent-and-controllable-image-to-video-generation-with-explicit-motion-modeling\/\" class=\"elementor-sub-item\" tabindex=\"-1\">Motion-I2V: 
PUBLICATIONS

The following publications are sorted by the English surnames of our Principal Investigators.
Professor Rosanna Yuen-Yan CHAN

- R. Yuen-Yan Chan, "EEG Transformer for Classifying Students' Epistemic Cognition States in Educational Contexts," IEEE Access, vol. 13, pp. 23935-23949, 2025. https://ieeexplore.ieee.org/document/10870110
- R. Yuen-Yan Chan, "Towards Rate-Distortion Analysis in Symbol-Based Assistive Communication," in ISIT 2024, Jul. 2024. https://2024.ieee-isit.org/recent-results-session
- C. Tong, R. Chan, "Gaze-Based Interaction Adaptation for People with Involuntary Head Movements (Student Abstract)," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 21, pp. 23669-23670, 2024. https://ojs.aaai.org/index.php/AAAI/article/view/30519
- R. Y.-Y. Chan, C. M. V. Wong and Y. N. Yum, "Predicting Behavior Change in Students With Special Education Needs Using Multimodal Learning Analytics," IEEE Access, vol. 11, pp. 63238-63251, 2023. https://ieeexplore.ieee.org/abstract/document/10159389
- C. M. V. Wong, R. Y.-Y. Chan, Y. N. Yum, K. Wang, "Internet of Things (IoT)-Enhanced Applied Behavior Analysis (ABA) for Special Education Needs," Sensors, vol. 21, no. 19, art. 6693, 16 pages, 2021. https://www.mdpi.com/1424-8220/21/19/6693

Professor Shih-Chi CHEN

- M. Ye, Y. Lei, S. Gu, Y. Wang, S. Chen, "Colloidal Silver Nanoparticles-assisted High-precision Parallel Laser Writing on Glass and Optical Crystals," Laser & Photonics Reviews, art. e00047, 2025. https://onlinelibrary.wiley.com/doi/full/10.1002/lpor.202500047
- C. Liu, X. Yan, S. Li, M. Mohammadnejad, B. Deng, N. X. Fang, Y. Habibi, S. Chen, X. Zhao, "A Metre-scale Vertical Origami Hydrogel Panel for Atmospheric Water Harvesting in Death Valley," Nature Water, vol. 3, pp. 714-722, 2025. https://www.nature.com/articles/s44221-025-00447-2#citeas
- S. Gu, B. Chen, X. Xu, F. Han, S. Chen, "3D Nanofabrication via Directed Material Assembly: Mechanism, Method, and Future," Advanced Materials, art. 2312915, 2025. https://advanced.onlinelibrary.wiley.com/doi/10.1002/adma.202312915
- J. Arab, P. Sharma, S. Chen, "Fabrication of Micro-holes in PMMA using Micro-ECDM Process: Geometric Characteristics and EC Discharge Behaviour," Journal of the Electrochemical Society, vol. 171, no. 5, art. 053506, 2024. https://iopscience.iop.org/article/10.1149/1945-7111/ad4a0d
- Y. Wang, Z. Bi, Y. Song, L. Duan, S. C. Chen, "Selective Activation of Photoactivatable Fluorescent Protein based on Binary Holography," Biomedical Optics Express, vol. 15, no. 5, pp. 3382-3393, 2024. https://opg.optica.org/directpdfaccess/6607aa52-8d9f-4589-8da91b1cdbecb3a1_549382/boe-15-5-3382.pdf?da=1&id=549382&seq=0&mobile=no
- S. Gu, F. Han, M. Ye, S. C. Chen, "Multi-material 3D Nanofabrication based on Ultrafast Lasers and Swellable Hydrogels," ASPE 2023. https://acrobat.adobe.com/id/urn:aaid:sc:ap:f27c53d8-1deb-4249-8727-2dca0106032f
- Z. Deng, S. Gu, Z. Feng, S. C. Chen, "High-throughput Maskless Optical Lithography System based on Digital Holography," ASPE 2023. https://acrobat.adobe.com/id/urn:aaid:sc:AP:d0008274-ac6e-43d9-ba0c-22f301293c14
- B. Ding, C. Li and S. C. Chen, "Design and Characterization of an XYθZ Nanopositioner," ASPE 2023. https://acrobat.adobe.com/id/urn:aaid:sc:ap:47f3fcbd-9527-4b81-b1df-f335f29a2dbd
- B. Ding, X. Li, C. Li, Y. Li, S. C. Chen, "A Survey on the Mechanical Design for Piezo-actuated Compliant Micro-positioning Stages," Review of Scientific Instruments, vol. 94, art. 101502, 2023. https://pubs.aip.org/aip/rsi/article-abstract/94/10/101502/2915714/A-survey-on-the-mechanical-design-for-piezo?redirectedFrom=fulltext
- T. Singh, J. Arab, S. C. Chen, "Improvement on Surface Quality of Inconel-718 Slits via Laser Cutting and Wire Electrochemical Machining Processes," Optics and Laser Technology, vol. 167, art. 109637, 2023. https://www.sciencedirect.com/science/article/abs/pii/S0030399223005303
- W. Ouyang, X. Xu, W. Lu, N. Zhao, F. Han, and S. Chen, "Ultrafast 3D Nanofabrication via Digital Holography," Nature Communications, vol. 14, art. 1716, 2023. https://www.nature.com/articles/s41467-023-37163-y
- C. Li and S. Chen, "Design of Compliant Mechanisms based on Compliant Building Elements. Part I: Principles," Precision Engineering, vol. 81, pp. 207-220, 2023. https://www.sciencedirect.com/science/article/abs/pii/S0141635923000065
- C. Li and S. Chen, "Design of Compliant Mechanisms based on Compliant Building Elements. Part II: Practice," Precision Engineering, vol. 81, pp. 8-21, 2023. https://www.sciencedirect.com/science/article/abs/pii/S0141635923000053
- F. Han, S. Gu, A. Klimas, N. Zhao, Y. Zhao, and S. Chen, "3D Nanofabrication via Ultrafast Laser Patterning and Kinetically-regulated Material Assembly," Science, vol. 378, no. 6626, pp. 1325-1331, 2022. https://www.science.org/stoken/author-tokens/ST-930/full
- X. Li, W. Liu, F. Goudail, and S. Chen, "Optimal Nonlinear Stokes-Mueller Polarimetry for Multi-photon Processes," Optics Letters, vol. 47, no. 13, pp. 3287-3290, 2022. https://opg.optica.org/ol/abstract.cfm?uri=ol-47-13-3287
- X. Li, J. Xu, L. Zhang, H. Hu, and S. Chen, "Underwater Image Restoration via Stokes Decomposition," Optics Letters, vol. 47, no. 11, pp. 2854-2857, 2022. https://opg.optica.org/ol/abstract.cfm?uri=ol-47-11-2854&origin=search
- S. Yang, F. Li, M. M. Gong, L. Zhang, Z. W. Zhu, H. B. Shen, S. C. Chen, "Generation of Q-switched and Mode-locked Pulses based on PbS/CdS Saturable Absorber in an Er-doped Fiber Laser," Journal of Materials Chemistry C, vol. 10, pp. 5956-5961, 2022. https://pubs.rsc.org/en/content/articlelanding/2022/tc/d1tc05746d
- X. Li, W. Lu, X. Xu, Y. Wang, S. C. Chen, "Advanced Optical Methods and Materials for Fabricating 3D Tissue Scaffolds," Light: Advanced Manufacturing, vol. 3, art. 26, 2022. https://www.light-am.com/en/article/doi/10.37188/lam.2022.026
- X. Li, F. Goudail, S. C. Chen, "Self-calibration for Mueller Polarimeters based on DoFP Polarization Imagers," Optics Letters, vol. 47, no. 6, pp. 1415-1418, 2022. https://opg.optica.org/ol/abstract.cfm?uri=ol-47-6-1415
- D. Chen, S. Gu, and S. C. Chen, "Study of Optical Modulation based on Binary Masks with Finite Pixels," Optics and Lasers in Engineering, vol. 142, art. 106604, 2021. https://www.sciencedirect.com/science/article/abs/pii/S0143816621000749
- X. Liu, X. Li, S. C. Chen, "Enhanced Polarization Demosaicking Network via a Precise Angle of Polarization Loss Calculation Method," Optics Letters, vol. 47, no. 5, pp. 1065-1068, 2022. https://opg.optica.org/ol/abstract.cfm?uri=ol-47-5-1065
- M. Ren, W. Lu, Q. Shao, F. Han, W. Ouyang, T. Zhang, C. C. L. Wang, and S. C. Chen, "Aberration-free Large-area Stitch-free 3D Nano-printing based on Binary Holography," Optics Express, vol. 29, no. 26, pp. 44250-44263, 2021. https://opg.optica.org/oe/fulltext.cfm?uri=oe-29-26-44250&id=466137
Professor Hongsheng LI

- Z. Lin, D. Liu, R. Zhang, P. Gao, L. Qiu, H. Xiao, H. Qiu, W. Shao, K. Chen, J. Han, S. Huang, Y. Zhang, X. He, Y. Qiao, H. Li, "SPHINX: A Mixer of Weights, Visual Embeddings and Image Scales for Multi-modal Large Language Models," in Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part LXII, Springer-Verlag, Berlin, Heidelberg, pp. 36-55. https://dl.acm.org/doi/10.1007/978-3-031-73033-7_3
- X. Chen, S. Shi, T. Ma, J. Zhou, S. See, K. C. Cheung, H. Li, "M3Net: Multimodal Multi-task Learning for 3D Detection, Segmentation, and Occupancy Prediction in Autonomous Driving," AAAI, vol. 39, no. 2, pp. 2275-2283, Apr. 2025. https://ojs.aaai.org/index.php/AAAI/article/view/32227
- D. Jiang, R. Zhang, Z. Guo, Y. Wu, J. Lei, P. Qiu, P. Lu, Z. Chen, C. Fu, G. Song, P. Gao, Y. Liu, C. Li, H. Li, "MMSearch: Unveiling the Potential of Large Models as Multi-modal Search Engines," in ICLR 2025, Apr. 2025. https://iclr.cc/virtual/2025/poster/30134
- P. Gao, L. Zhuo, D. Liu, R. Du, X. Luo, L. Qiu, Y. Zhang, C. Lin, R. Huang, S. Geng, R. Zhang, J. Xi, W. Shao, Z. Jiang, T. Yang, W. Ye, H. Tong, J. He, Y. Qiao, H. Li, "Lumina-T2X: Scalable Flow-based Large Diffusion Transformer for Flexible Resolution Generation," in ICLR 2025, Apr. 2025. https://iclr.cc/virtual/2025/poster/30398
- X. Ju and H. Li, "DirectTriGS: Triplane-based Gaussian Splatting Field Representation for 3D Generation," in 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2025, pp. 16229-16239. https://ieeexplore.ieee.org/document/11094098
- W. Bian, Z. Huang, X. Shi, Y. Li, F.-Y. Wang and H. Li, "GS-DiT: Advancing Video Generation with Dynamic 3D Gaussian Fields through Efficient Dense 3D Point Tracking," in 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2025, pp. 21717-21727. https://ieeexplore.ieee.org/document/11093572
- R. Zhang, X. Wei, D. Jiang, Z. Guo, S. Li, Y. Zhang, C. Tong, J. Liu, A. Zhou, B. Wei, S. Zhang, P. Gao, C. Li, H. Li, "MAVIS: Mathematical Visual Instruction Tuning with an Automatic Data Engine," in ICLR 2025, Apr. 2025. https://iclr.cc/virtual/2025/poster/29918
- X. Chen, L. Huang, T. Ma, R. Fang, S. Shi and H. Li, "SOLVE: Synergy of Language-Vision and End-to-End Networks for Autonomous Driving," in 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2025, pp. 12068-12077. https://ieeexplore.ieee.org/document/11094576
- X. Yang, L. Xu, H. Li, S. Zhang, "One Leaf Reveals the Season: Occlusion-Based Contrastive Learning with Semantic-Aware Views for Efficient Visual Representation," in ICML 2025, Jul. 2025. https://icml.cc/virtual/2025/poster/43759
- X. Yang, L. Xu, S. Yu, Q. Xia, H. Li and S. Zhang, "Segmentation and Vascular Vectorization for Coronary Artery by Geometry-Based Cascaded Neural Network," IEEE Transactions on Medical Imaging, vol. 44, no. 1, pp. 259-269, Jan. 2025. https://ieeexplore.ieee.org/document/10614624
- L. Wei, C. Liu, Z. Zhang, W. Zhang, S. Zhang, H. Li, "VBCD: A Voxel-Based Framework for Personalized Dental Crown Design," in MICCAI 2025, Sep. 2025. https://papers.miccai.org/miccai-2025/0999-Paper2280.html
- R. Zhang, J. Han, C. Liu, A. Zhou, P. Lu, Y. Qiao, H. Li, P. Gao, "LLaMA-Adapter: Efficient Fine-tuning of Large Language Models with Zero-initialized Attention," in ICLR 2024, May 2024. https://proceedings.iclr.cc/paper_files/paper/2024/hash/c196239c5f9481e0db2755f31fe4585f-Abstract-Conference.html
- H. Shao, Y. Hu, L. Wang, G. Song, S. L. Waslander, Y. Liu, H. Li, "LMDrive: Closed-Loop End-to-End Driving with Large Language Models," in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2024, pp. 15120-15130. https://www.computer.org/csdl/proceedings-article/cvpr/2024/530000p120/20hTwMLMwr6
- X. Ju, Z. Huang, Y. Li, G. Zhang, Y. Qiao, H. Li, "DiffInDScene: Diffusion-Based High-Quality 3D Indoor Scene Generation," in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2024, pp. 4526-4535. https://www.computer.org/csdl/proceedings-article/cvpr/2024/530000e526/20hRw9PffUs
- F. Wang, X. Wu, Z. Huang, X. Shi, D. Shen, G. Song, Y. Liu, H. Li, "Be-Your-Outpainter: Mastering Video Outpainting Through Input-Specific Adaptation," in Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XLIV, Springer-Verlag, Berlin, Heidelberg, pp. 153-168. https://dl.acm.org/doi/abs/10.1007/978-3-031-72784-9_9
- L. Huang, R. Fang, A. Zhang, G. Song, S. Liu, Y. Liu, H. Li, "FouriScale: A Frequency Perspective on Training-Free High-Resolution Image Synthesis," in Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XII, Springer-Verlag, Berlin, Heidelberg, pp. 196-212. https://dl.acm.org/doi/10.1007/978-3-031-73254-6_12
- F. Wang, Z. Huang, A. W. Bergman, D. Shen, P. Gao, M. Lingelbach, K. Sun, W. Bian, G. Song, Y. Liu, X. Wang, H. Li, "Phased Consistency Models," in Advances in Neural Information Processing Systems 37 (NeurIPS 2024), Dec. 2024. https://proceedings.neurips.cc/paper_files/paper/2024/hash/98a29475083c502c34949f9baa1aa2ef-Abstract-Conference.html
- K. Wang, J. Pan, W. Shi, Z. Lu, H. Ren, A. Zhou, M. Zhan, H. Li, "Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset," in Advances in Neural Information Processing Systems 37 (NeurIPS 2024), Dec. 2024. https://proceedings.neurips.cc/paper_files/paper/2024/hash/ad0edc7d5fa1a783f063646968b7315b-Abstract-Datasets_and_Benchmarks_Track.html
- D. Jiang, G. Song, X. Wu, R. Zhang, D. Shen, Z. Zong, Y. Liu, H. Li, "CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching," in Advances in Neural Information Processing Systems 37 (NeurIPS 2024), Dec. 2024. https://proceedings.neurips.cc/paper_files/paper/2024/hash/8b54ecd9823fff6d37e61ece8f87e534-Abstract-Conference.html
- Z. Lu, A. Zhou, K. Wang, H. Ren, W. Shi, J. Pan, M. Zhan, H. Li, "MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code," in ICLR 2025, Apr. 2025. https://iclr.cc/virtual/2025/poster/31210
- F. Wang, Y. Shui, J. Piao, K. Sun, H. Li, "Diffusion-NPO: Negative Preference Optimization for Better Preference Aligned Generation of Diffusion Models," in ICLR 2025, Apr. 2025. https://iclr.cc/virtual/2025/poster/32065
- F. Wang, L. Yang, Z. Huang, M. Wang, H. Li, "Rectified Diffusion: Straightness Is Not Your Need in Rectified Flow," in ICLR 2025, Apr. 2025. https://iclr.cc/virtual/2025/poster/28415
- W. Lin, X. Wei, R. An, P. Gao, B. Zou, Y. Luo, S. Huang, S. Zhang, H. Li, "Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want," in ICLR 2025, Apr. 2025. https://iclr.cc/virtual/2025/poster/29098
- W. Lin, X. Wei, R. Zhang, L. Zhuo, S. Zhao, S. Huang, J. Xie, P. Gao, H. Li, "PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions," in ICLR 2025, Apr. 2025. https://iclr.cc/virtual/2025/poster/32047
- F. Hong, L. Kong, H. Zhou, X. Zhu, H. Li, Z. Liu, "Unified 3D and 4D Panoptic Segmentation via Dynamic Shifting Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 5, pp. 3480-3495, 2024. https://ieeexplore.ieee.org/document/10380457
- F. Yan, Q. Da, H. Yi, S. Deng, L. Zhu, Y. Liu, M. Feng, J. Wang, X. Wang, Y. Zhang, W. Zhang, X. Zhang, J. Lin, S. Zhang, C. Wang, "Artificial Intelligence-based Assessment of PD-L1 Expression in Diffuse Large B Cell Lymphoma," npj Precision Oncology, art. 76, 2024. https://www.nature.com/articles/s41698-024-00577-y#citeas
- R. Zhang, Z. Jiang, Z. Guo, S. Yan, J. Pan, H. Dong, Y. Qiao, P. Gao, H. Li, "Personalize Segment Anything Model with One Shot," in ICLR 2024. https://openreview.net/pdf?id=6Gzkhoc6YS
- R. Zhang, J. Han, C. Liu, A. Zhou, P. Lu, Y. Qiao, H. Li, P. Gao, "LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-initialized Attention," in ICLR 2024. https://openreview.net/forum?id=d4UiXAHN2W
- K. Sun, S. Wu, N. Zhang, Z. Huang, Q. Wang, H. Li, "CGOF++: Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 2, pp. 913-926, 2023. https://www.computer.org/csdl/journal/tp/2024/02/10375777/1TfB8znxdlK
- W. Bian, Z. Huang, X. Shi, Y. Dong, Y. Li, H. Li, "Context-PIPs: Persistent Independent Particles Demands Context Features," in NeurIPS 2023. https://proceedings.neurips.cc/paper_files/paper/2023/file/ad2fa437f7c23e4e9875599c6065d18a-Paper-Conference.pdf
- X. Yang, L. Xu, S. Yu, Q. Xia, H. Li, S. Zhang, "Geometry-Based End-to-End Segmentation of Coronary Artery in Computed Tomography Angiography," ICLR 2023 Workshop TML4H. https://openreview.net/pdf?id=1AEYVVNfvvx
- F. Lu, Y. Xu, G. Chen, H. Li, K. Y. Lin, C. Jiang, "Urban Radiance Field Representation with Deformable Neural Mesh Primitives," in ICCV 2023. https://openaccess.thecvf.com/content/ICCV2023/supplemental/Lu_Urban_Radiance_Field_ICCV_2023_supplemental.pdf
- S. Fan, J. Piao, C. Qian, H. Li, K. Y. Lin, "Simulating Fluids in Real-world Still Images," in ICCV 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Fan_Simulating_Fluids_in_Real-World_Still_Images_ICCV_2023_paper.pdf
- Y. Zhang, X. Shi, D. Li, X. Wang, J. Wang, H. Li, "A Unified Conditional Framework for Diffusion-based Image Restoration," in NeurIPS 2023. https://proceedings.neurips.cc/paper_files/paper/2023/file/9bf0810a4a1597a36d27ceea58667d92-Paper-Conference.pdf
- K. Sun, J. Pan, Y. Ge, H. Li, H. Duan, X. Wu, R. Zhang, A. Zhou, Z. Qin, Y. Wang, J. Dai, Y. Qiao, L. Wang, H. Li, "JourneyDB: A Benchmark for Generative Image Understanding," in NeurIPS 2023. https://proceedings.neurips.cc/paper_files/paper/2023/file/9bc59aff4685e39e1a8175d5303248a1-Paper-Datasets_and_Benchmarks.pdf
- Y. Niu, Y. Pu, Z. Yang, X. Li, T. Zhou, J. Ren, S. Hu, H. Li, Y. Liu, "LightZero: A Unified Benchmark for the Monte Carlo Tree Search in General Sequential Decision Scenarios," in NeurIPS 2023. https://proceedings.neurips.cc/paper_files/paper/2023/file/765043fe026f7d704c96cec027f13843-Paper-Datasets_and_Benchmarks.pdf
- L. Huang, K. Lu, G. Song, L. Wang, S. Liu, Y. Liu, H. Li, "Teach-DETR: Better Training DETR with Teachers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 12, pp. 15759-15771, 2023. https://ieeexplore.ieee.org/abstract/document/10264211/citations#citations
- X. Wu, K. Sun, F. Zhu, R. Zhao, H. Li, "Human Preference Score: Better Aligning Text-to-Image Models with Human Preference," in ICCV 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Wu_Human_Preference_Score_Better_Aligning_Text-to-Image_Models_with_Human_Preference_ICCV_2023_paper.pdf
- T. Ma, X. Yang, H. Zhou, X. Li, B. Shi, J. Liu, Y. Yang, Z. Liu, L. He, Y. Qiao, Y. Li, H. Li, "DetZero: Rethinking Offboard 3D Object Detection with Long-term Sequential Point Clouds," in ICCV 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Ma_DetZero_Rethinking_Offboard_3D_Object_Detection_with_Long-term_Sequential_Point_ICCV_2023_paper.pdf
- X. Shi, Z. Huang, W. Bian, D. Li, M. Zhang, K. C. Cheung, S. See, H. Qin, J. Dai, H. Li, "VideoFlow: Exploiting Temporal Cues for Multi-frame Optical Flow Estimation," in ICCV 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Shi_VideoFlow_Exploiting_Temporal_Cues_for_Multi-frame_Optical_Flow_Estimation_ICCV_2023_paper.pdf
- J. Liu, T. Wang, B. Liu, Q. Zhang, Y. Liu, H. Li, "GeoMIM: Towards Better 3D Knowledge Transfer via Masked Image Modeling for Multi-view 3D Understanding," in ICCV 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Liu_GeoMIM_Towards_Better_3D_Knowledge_Transfer_via_Masked_Image_Modeling_ICCV_2023_paper.pdf
- A. Zhou, Y. Li, Z. Qin, J. Liu, J. Pan, R. Zhang, R. Zhao, P. Gao, H. Li, "SparseMAE: Sparse Training Meets Masked Autoencoders," in ICCV 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Zhou_SparseMAE_Sparse_Training_Meets_Masked_Autoencoders_ICCV_2023_paper.pdf
- X. Chen, S. Shi, C. Zhang, B. Zhu, Q. Wang, K. C. Cheung, S. See, H. Li, "TrajectoryFormer: 3D Object Tracking Transformer with Predictive Trajectory Hypotheses," in ICCV 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Chen_TrajectoryFormer_3D_Object_Tracking_Transformer_with_Predictive_Trajectory_Hypotheses_ICCV_2023_paper.pdf
- M. Zhang, G. Song, Y. Liu, H. Li, "Decoupled DETR: Spatially Disentangling Localization and Classification for Improved End-to-End Object Detection," in ICCV 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_Decoupled_DETR_Spatially_Disentangling_Localization_and_Classification_for_Improved_End-to-End_ICCV_2023_paper.pdf
- J. Yao, C. Li, K. Sun, Y. Cai, H. Li, W. Ouyang, H. Li, "NDC-Scene: Boost Monocular 3D Semantic Scene Completion in Normalized Device Coordinates Space," in ICCV 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Yao_NDC-Scene_Boost_Monocular_3D_Semantic_Scene_Completion_in_Normalized_Device_ICCV_2023_paper.pdf
- R. Zhang, H. Qiu, T. Wang, Z. Guo, Z. Cui, Y. Qiao, P. Gao, H. Li, "MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection," in ICCV 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_MonoDETR_Depth-guided_Transformer_for_Monocular_3D_Object_Detection_ICCV_2023_paper.pdf
- C. Tao, D. Gu, R. Huang, L. Zhou, Z. Hu, Y. Chen, X. Zhang, "Hippocampus Segmentation after Brain Tumor Resection via Postoperative Region Synthesis," BMC Medical Imaging, vol. 23, no. 1, pp. 1-142, 2023. https://link.springer.com/article/10.1186/s12880-023-01087-2#Ack1
- Q. Chang, Z. Yan, M. Zhou, H. Zhang, X. He, H. Zhang, L. Baskaran, S. Al'Aref, H. Li, S. Zhang, D. N. Metaxas, "Mining multi-center heterogeneous medical data with distributed synthetic learning," Nature Communications, vol. 14, art. 5510, 2023. https://www.nature.com/articles/s41467-023-40687-y#Abs1
- B. Zhu, Z. Wang, S. Shi, H. Xu, L. Hong, and H. Li, "ConQueR: Query Contrast Voxel-DETR for 3D Object Detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Zhu_ConQueR_Query_Contrast_Voxel-DETR_for_3D_Object_Detection_CVPR_2023_paper.html
- X. Shi, Z. Huang, D. Li, M. Zhang, K. C. Cheung, S. See, H. Qin, J. Dai, and H. Li, "FlowFormer++: Masked Cost Volume Autoencoding for Pretraining Optical Flow Estimation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Shi_FlowFormer_Masked_Cost_Volume_Autoencoding_for_Pretraining_Optical_Flow_Estimation_CVPR_2023_paper.html
- R. Zhang, X. Hu, B. Li, S. Huang, H. Deng, H. Li, Y. Qiao, and P. Gao, "Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Zhang_Prompt_Generate_Then_Cache_Cascade_of_Foundation_Models_Makes_Strong_CVPR_2023_paper.html
- X. Shi, Y. Zhang, K. C. Cheung, S. See, X. Wang, H. Qin, and H. Li, "A Simple Baseline for Video Restoration with Grouped Spatial-temporal Shift," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Li_A_Simple_Baseline_for_Video_Restoration_With_Grouped_Spatial-Temporal_Shift_CVPR_2023_paper.html
- X. Wu, F. Zhu, R. Zhao, and H. Li, "CORA: Adapting CLIP for Open-Vocabulary Detection with Region Prompting and Anchor Pre-Matching," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Wu_CORA_Adapting_CLIP_for_Open-Vocabulary_Detection_With_Region_Prompting_and_CVPR_2023_paper.html
- R. Zhang, L. Wang, Y. Qiao, P. Gao, and H. Li, "Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Zhang_Learning_3D_Representations_From_2D_Pre-Trained_Models_via_Image-to-Point_Masked_CVPR_2023_paper.html
- R. Zhang, L. Wang, Z. Guo, Y. Wang, P. Gao, H. Li, and J. Shi, "Starting from Non-Parametric Networks for 3D Point Cloud Analysis," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Zhang_Starting_From_Non-Parametric_Networks_for_3D_Point_Cloud_Analysis_CVPR_2023_paper.html
- J. Liu, X. Huang, J. Zheng, Y. Liu, and H. Li, "MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Liu_MixMAE_Mixed_and_Masked_Autoencoder_for_Efficient_Pretraining_of_Hierarchical_CVPR_2023_paper.html
- J. Zhou, L. Huang, L. Wang, S. Liu, and H. Li, "Improving Weakly Supervised Temporal Action Localization by Bridging Train-Test Gap in Pseudo Labels," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Zhou_Improving_Weakly_Supervised_Temporal_Action_Localization_by_Bridging_Train-Test_Gap_CVPR_2023_paper.html
- J. Mao, S. Shi, X. Wang, and H. Li, "3D Object Detection for Autonomous Driving: A Comprehensive Survey," International Journal of Computer Vision, Apr. 2023. https://link.springer.com/article/10.1007/s11263-023-01790-1
- Z. Huang, X. Pan, W. Pan, W. Bian, Y. Xu, K. C. Cheung, G. Zhang, and H. Li, "NeuralMarker: A Framework for Learning General Marker Correspondence," ACM Transactions on Graphics, vol. 41, no. 6, pp. 1-10, Nov. 2022. https://dl.acm.org/doi/abs/10.1145/3550454.3555468
- K. Sun, S. Wu, Z. Huang, N. Zhang, Q. Wang, and H. Li, "Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields," in Advances in Neural Information Processing Systems (NeurIPS), 2022. https://proceedings.neurips.cc/paper_files/paper/2022/hash/67b0e7c7c2a5780aeefe3b79caac106e-Abstract-Conference.html
- R. Zhang, Z. Guo, R. Fang, B. Zhao, D. Wang, Y. Qiao, H. Li, and P. Gao, "Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training," in Advances in Neural Information Processing Systems (NeurIPS), 2022. https://proceedings.neurips.cc/paper_files/paper/2022/hash/ad1d7a4df30a9c0c46b387815a774a84-Abstract-Conference.html
- J. Pan, Z. Lin, X. Zhu, J. Shao, and H. Li, "ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning," in Advances in Neural Information Processing Systems (NeurIPS), 2022. https://proceedings.neurips.cc/paper_files/paper/2022/hash/a92e9165b22d4456fc6d87236e04c266-Abstract-Conference.html
- Z. Lin, S. Geng, R. Zhang, P. Gao, G. de Melo, X. Wang, J. Dai, Y. Qiao, and H. Li, "Frozen CLIP Models are Efficient Video Learners," in Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, vol. 13695, pp. 388-404, Nov. 2022. https://www.ecva.net/papers/eccv_2022/papers_ECCV/html/6715_ECCV_2022_paper.php
- X. Chen, S. Shi, B. Zhu, K. C. Cheung, H. Xu, and H. Li, "MPPNet: Multi-frame Feature Intertwining with Proxy Points for 3D Temporal Object Detection," in Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, pp. 680-697, 2022. https://www.ecva.net/papers/eccv_2022/papers_ECCV/html/368_ECCV_2022_paper.php
- D. Li, Y. Zhang, K. C. Cheung, X. Wang, H. Qin, and H. Li, "Learning Degradation Representations for Image Deblurring," in Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, vol. 13678, pp. 736-753, 2022. https://www.ecva.net/papers/eccv_2022/papers_ECCV/html/4261_ECCV_2022_paper.php
- D. Li, Y. Zhang, K. L. Law, X. Wang, H. Qin, and H. Li, "Efficient Burst Raw Denoising with Variance Stabilization and Multi-frequency Denoising Network," International Journal of Computer Vision, vol. 130, no. 8, pp. 2060-2080, Jun. 2022. https://link.springer.com/article/10.1007/s11263-022-01627-3
- Y. Cai, K. Y. Lin, C. Zhang, Q. Wang, X. Wang, H. Li, "Learning a Structured Latent Space for Unsupervised Point Cloud Completion," in IEEE Conference on Computer Vision and Pattern Recognition, 2022. https://ieeexplore.ieee.org/document/9878480
- R. Zhang, Z. Guo, W. Zhang, K. Li, X. Miao, B. Cui, Y. Qiao, P. Gao, H. Li, "PointCLIP: Point Cloud Understanding by CLIP," in IEEE Conference on Computer Vision and Pattern Recognition, 2022. https://ieeexplore.ieee.org/document/9878980
- Y. Zhang, D. Li, K. L. Law, X. Wang, H. Qin, H. Li, "IDR: Self-Supervised Image Denoising via Iterative Data Refinement," in IEEE Conference on Computer Vision and Pattern Recognition, 2022. https://ieeexplore.ieee.org/document/9879573
- Y. Xu, K. Y. Lin, G. Zhang, X. Wang, H. Li, "RNNPose: Recurrent 6-DoF Object Pose Refinement with Robust Correspondence Field Estimation and Pose Optimization," in IEEE Conference on Computer Vision and Pattern Recognition, 2022. https://ieeexplore.ieee.org/document/9879539
- L. Huang, L. Wang, H. Li, "Weakly Supervised Temporal Action Localization via Representative Snippet Knowledge Propagation," in IEEE Conference on Computer Vision and Pattern Recognition, 2022. https://ieeexplore.ieee.org/document/9878778
- X. Zhang, Y. Ge, Y. Qiao, and H. Li, "Refining Pseudo Labels with Clustering Consensus over Generations for Unsupervised Object Re-identification," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 19-25, 2021. https://ieeexplore.ieee.org/document/9577618

Professor Dahua LIN

- Z. Liu, S. Ding, Z. Zhang, X. Dong, P. https://icml.cc/virtual/2025/poster/44792
Professor Dahua LIN

Z. Liu, S. Ding, Z. Zhang, X. Dong, P. Zhang, Y. Zang, Y. Cao, D. Lin, J. Wang, “SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation,” Forty-Second International Conference on Machine Learning, 2025. https://icml.cc/virtual/2025/poster/44792
X. Wei, X. Liu, Y. Zang, X. Dong, P. Zhang, Y. Cao, J. Tong, H. Duan, Q. Guo, J. Wang, X. Qiu, D. Lin, “VideoRoPE: What Makes for Good Video Rotary Position Embedding?” Forty-Second International Conference on Machine Learning, 2025. https://icml.cc/virtual/2025/poster/43783
L. Chen, X. Wei, J. Li, X. Dong, P. Zhang, Y. Zang, Z. Chen, H. Duan, B. Lin, Z. Tang, L. Yuan, Y. Qiao, D. Lin, F. Zhao, J. Wang, “ShareGPT4Video: Improving Video Understanding and Generation with Better Captions,” In Advances in Neural Information Processing Systems 37 (2024), pp. 19472-19495. https://proceedings.neurips.cc/paper_files/paper/2024/hash/22a7476e4fd36818777c47e666f61a41-Abstract-Datasets_and_Benchmarks_Track.html
Y. Guo, C. Yang, A. Rao, M. Agrawala, D. Lin, B. Dai, “SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models,” In Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XLII. Springer-Verlag, Berlin, Heidelberg, 330–348. https://dl.acm.org/doi/10.1007/978-3-031-72946-1_19
F. Cui, C. Yin, K. Zhou, Y. Xiao, G. Sun, Q. Xu, Q. Guo, D. Song, D. Lin, X. Zhang, Y. Liang, “OriGen: Enhancing RTL Code Generation with Code-to-Code Augmentation and Self-Reflection,” 2024 ACM/IEEE International Conference on Computer Aided Design (ICCAD), Newark, NJ, USA, 2024, pp. 1-9. https://ieeexplore.ieee.org/document/11126250
J. Duan, X. Li, P. Xu, X. Zhang, S. Yan, Y. Liang, D. Lin, “Proteus: Simulating the Performance of Distributed DNN Training,” in IEEE Transactions on Parallel and Distributed Systems, vol. 35, no. 10, pp. 1867-1878, Oct. 2024. https://ieeexplore.ieee.org/document/10636756
H. Duanmu, X. Li, Z. Yuan, S. Zheng, J. Duan, X. Zhang, D. Lin, “MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design,” in ICML 2025, 2025. https://icml.cc/virtual/2025/poster/43995
T. Lu, M. Yu, L. Xu, Y. Xiangli, L. Wang, D. Lin, B. Dai, “Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering,” 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2024, pp. 20654-20664. https://ieeexplore.ieee.org/document/10658518
R. Qian, S. Ding, D. Lin, “Rethinking Image-to-Video Adaptation: An Object-Centric Perspective,” In Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XLIII. Springer-Verlag, Berlin, Heidelberg, 329–348. https://dl.acm.org/doi/10.1007/978-3-031-72775-7_19
M. Zhang, T. Wu, T. Wang, T. Wang, Z. Liu, D. Lin, “Omni6D: Large-Vocabulary 3D Object Dataset for Category-Level 6D Object Pose Estimation,” In Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XXV. Springer-Verlag, Berlin, Heidelberg, 216–232. https://dl.acm.org/doi/10.1007/978-3-031-72698-9_13
Z. Wang, Y. Li, Y. Zeng, Y. Fang, Y. Guo, W. Liu, J. Tan, K. Chen, T. Xue, B. Dai, D. Lin, “HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation,” In Advances in Neural Information Processing Systems 37 (2024), Dec. 2024. https://proceedings.neurips.cc/paper_files/paper/2024/hash/23f3a0f82d79d985b6076bc84d14f66b-Abstract-Datasets_and_Benchmarks_Track.html
X. Dong, P. Zhang, Y. Zang, Y. Cao, B. Wang, L. Ouyang, S. Zhang, H. Duan, W. Zhang, Y. Li, H. Yan, Y. Gao, Z. Chen, X. Zhang, W. Li, J. Li, W. Wang, K. Chen, C. He, X. Zhang, J. Dai, Y. Qiao, D. Lin, J. Wang, “InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD,” In Advances in Neural Information Processing Systems 37 (2024), Dec. 2024. https://proceedings.neurips.cc/paper_files/paper/2024/hash/4b06cdddb1cde6624c0be1465c7b800f-Abstract-Conference.html
R. Lyu, J. Lin, T. Wang, S. Yang, X. Mao, Y. Chen, R. Xu, H. Huang, C. Zhu, D. Lin, J. Pang, “MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations,” In Advances in Neural Information Processing Systems 37 (2024), Dec. 2024. https://proceedings.neurips.cc/paper_files/paper/2024/hash/5aed0d900297bd5593afc14ff452d4a8-Abstract-Datasets_and_Benchmarks_Track.html
X. Fang, K. Mao, H. Duan, X. Zhao, Y. Li, D. Lin, K. Chen, “MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding,” In Advances in Neural Information Processing Systems 37 (2024), Dec. 2024. https://proceedings.neurips.cc/paper_files/paper/2024/hash/a2326c9715a516c91174132e0170073a-Abstract-Datasets_and_Benchmarks_Track.html
Z. Wang, J. Wang, Y. Li, D. Lin, B. Dai, “InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint,” In Advances in Neural Information Processing Systems 37 (2024), Dec. 2024. https://proceedings.neurips.cc/paper_files/paper/2024/hash/be41269a9fe258f1ecba663b0b402322-Abstract-Conference.html
T. Wu, Y. Xu, R. Po, M. Zhang, G. Yang, J. Wang, Z. Liu, D. Lin, G. Wetzstein, “FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models,” In Advances in Neural Information Processing Systems 37 (2024), Dec. 2024. https://proceedings.neurips.cc/paper_files/paper/2024/hash/3870e6fe5fc75307508ee1458e81e27c-Abstract-Datasets_and_Benchmarks_Track.html
T. Lan, W. Zhang, C. Xu, H. Huang, D. Lin, K. Chen, X. Mao, “CriticEval: Evaluating Large-scale Language Model as Critic,” In Advances in Neural Information Processing Systems 37 (2024), Dec. 2024. https://proceedings.neurips.cc/paper_files/paper/2024/hash/7b7d7985f62284060d65f532ed2ea5fa-Abstract-Conference.html
Y. Guo, C. Yang, A. Rao, Z. Liang, Y. Wang, Y. Qiao, M. Agrawala, D. Lin, B. Dai, “AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning,” ICLR 2024, 2024. https://openreview.net/pdf/f6573e09993ba1372e3b282b7d2b9d1b19cef04f.pdf
R. Qian, S. Ding, X. Liu, D. Lin, “Semantics Meets Temporal Correspondence: Self-supervised Object-centric Learning in Videos,” ICCV 2023, 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Qian_Semantics_Meets_Temporal_Correspondence_Self-supervised_Object-centric_Learning_in_Videos_ICCV_2023_paper.pdf
Y. Li, J. Zhao, D. Zheng, Z. Y. Hu, Z. Chen, X. Su, Y. Huang, S. Huang, D. Lin, M. R. Lyu, L. Wang, “CLEVA: Chinese Language Models EVAluation Platform,” In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 186-217, 2023. https://aclanthology.org/2023.emnlp-demo.17.pdf
Y. Xiangli, L. Xu, X. Pan, N. Zhao, B. Dai, D. Lin, “AssetField: Assets Mining and Reconfiguration in Ground Feature Plane Representation,” ICCV 2023, 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Xiangli_AssetField_Assets_Mining_and_Reconfiguration_in_Ground_Feature_Plane_Representation_ICCV_2023_paper.pdf
Y. Li, L. Xu, Y. Xiangli, Z. Wang, D. Lin, B. Dai, “MatrixCity: A Large-scale City Dataset for City-scale Neural Rendering and Beyond,” ICCV 2023, 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Li_MatrixCity_A_Large-scale_City_Dataset_for_City-scale_Neural_Rendering_and_ICCV_2023_paper.pdf
J. Wang, P. Zhang, T. Chu, Y. Cao, Y. Zhou, T. Wu, B. Wang, C. He, D. Lin, “V3Det: Vast Vocabulary Visual Detection Dataset,” ICCV 2023, 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Wang_V3Det_Vast_Vocabulary_Visual_Detection_Dataset_ICCV_2023_paper.pdf
Z. Lyu, J. Wang, Y. An, Y. Zhang, D. Lin, B. Dai, “Controllable Mesh Generation Through Sparse Latent Point Diffusion Models,” CVPR 2023, 2023. https://openaccess.thecvf.com/content/CVPR2023/papers/Lyu_Controllable_Mesh_Generation_Through_Sparse_Latent_Point_Diffusion_Models_CVPR_2023_paper.pdf
R. Xu, T. Wang, W. Zhang, R. Chen, J. Cao, J. Pang, D. Lin, “MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based Self-Supervised Pre-Training,” CVPR 2023, 2023. https://openaccess.thecvf.com/content/CVPR2023/papers/Xu_MV-JAR_Masked_Voxel_Jigsaw_and_Reconstruction_for_LiDAR-Based_Self-Supervised_Pre-Training_CVPR_2023_paper.pdf
A. Rao, X. Jiang, Y. Guo, L. Xu, L. Yang, L. Jin, D. Lin, B. Dai, “Dynamic Storyboard Generation in an Engine-based Virtual Environment for Video Production,” SIGGRAPH 2023, 2023. https://arxiv.org/pdf/2301.12688
T. Wu, J. Zhang, X. Fu, Y. Wang, J. Ren, L. Pan, W. Wu, L. Yang, J. Wang, C. Qian, D. Lin, Z. Liu, “OmniObject3D: Large-vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation,” CVPR 2023, 2023. https://openaccess.thecvf.com/content/CVPR2023/supplemental/Wu_OmniObject3D_Large-Vocabulary_3D_CVPR_2023_supplemental.pdf
W. Li, Y. Lai, L. Xu, Y. Xiangli, J. Yu, C. He, G. S. Xia, and D. Lin, “OmniCity: Omnipotent City Understanding with Multi-level and Multi-view Images,” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Li_OmniCity_Omnipotent_City_Understanding_With_Multi-Level_and_Multi-View_Images_CVPR_2023_paper.html
L. Xu, Y. Xiangli, S. Peng, X. Pan, N. Zhao, C. Theobalt, B. Dai, and D. Lin, “Grid-guided Neural Radiance Fields for Large Urban Scenes,” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Xu_Grid-Guided_Neural_Radiance_Fields_for_Large_Urban_Scenes_CVPR_2023_paper.html
Y. Jin, J. Wang, and D. Lin, “Multi-level Logit Distillation,” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://openaccess.thecvf.com/content/CVPR2023/html/Jin_Multi-Level_Logit_Distillation_CVPR_2023_paper.html
Y. Jin, J. Wang, and D. Lin, “Semi-Supervised Semantic Segmentation via Gentle Teaching Assistant,” In Advances in Neural Information Processing Systems (NeurIPS), 2022. https://proceedings.neurips.cc/paper_files/paper/2022/hash/12d286282e1be5431ea05262a21f415c-Abstract-Conference.html
H. Duan, N. Zhao, K. Chen, and D. Lin, “TransRank: Self-supervised Video Representation Learning via Ranking-based Transformation Recognition,” In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. https://ieeexplore.ieee.org/document/9878879
J. Wang, S. Yan, B. Dai, and D. Lin, “Scene-aware Generative Network for Human Motion Synthesis,” In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021. https://ieeexplore.ieee.org/document/9578765
L. Xu, Y. Xiangli, A. Rao, N. Zhao, B. Dai, Z. Liu, and D. Lin, “BlockPlanner: City Block Generation with Vectorized Graph Representation,” In International Conference on Computer Vision, 2021. https://ieeexplore.ieee.org/document/9710289
T. Wang, X. Zhu, J. Pang, and D. Lin, “Probabilistic and Geometric Depth: Detecting Objects in Perspective,” In Conference on Robot Learning, 2021. https://arxiv.org/abs/2107.14160
X. Zhu, H. Zhou, F. Hong, T. Wang, Y. Ma, W. Li, H. Li, and D. Lin, “Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation,” In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021. https://ieeexplore.ieee.org/document/9578697
T. Wu, L. Pan, J. Zhang, T. Wang, Z. Liu, and D. Lin, “Balanced Chamfer Distance as a Comprehensive Metric for Point Cloud Completion,” In Conference on Neural Information Processing Systems, 2021. https://proceedings.neurips.cc/paper/2021/hash/f3bd5ad57c8389a8a1a541a76be463bf-Abstract.html

Professor Eric LO

Z. Lai, F. Cui, H. Fan, E. Lo, W. Zhou, F. Li, “Occam’s Razor for Distributed Protocols,” In Proceedings of the 2024 ACM Symposium on Cloud Computing (SoCC ’24). Association for Computing Machinery, New York, NY, USA, 618–636. https://dl.acm.org/doi/10.1145/3698038.3698514
B. Lu, K. Huang, C. M. Liang, T. Wang, E. Lo, “DEX: Scalable Range Indexing on Disaggregated Memory,” Proc. VLDB Endow. 17, 10 (June 2024), 2603–2616. https://dl.acm.org/doi/10.14778/3675034.3675050
C. Chang, E. Lo, C. Ye, “Biathlon: Harnessing Model Resilience for Accelerating ML Inference Pipelines,” Proc. VLDB Endow. 17, 10 (June 2024), 2631–2640. https://dl.acm.org/doi/10.14778/3675034.3675052
Y. Zhang, J. Xing, B. Xia, S. Liu, B. Peng, X. Tao, P. Wan, E. Lo, J. Jia, “Training-Free Efficient Video Generation via Dynamic Token Carving,” in NeurIPS 2025, Dec. 2025. https://neurips.cc/virtual/2025/poster/119278
Y. Zhang, Y. Y. Liu, B. Xia, B. Peng, Z. Yan, E. Lo, J. Jia, “MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers,” in ICCV 2025, Hawaii, USA, Oct. 2025. https://arxiv.org/abs/2501.03931
Z. Zhong, C. Wang, Y. Liu, S. Yang, L. Tang, Y. Zhang, J. Li, T. Qu, Y. Li, Y. Chen, S. Yu, S. Wu, E. Lo, S. Liu, J. Jia, “Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition,” in ICCV 2025, Oct. 2025. https://arxiv.org/abs/2412.09501
Y. Zhang, J. Xing, E. Lo, J. Jia, “Real-World Image Variation by Aligning Diffusion Inversion Chain,” NeurIPS 2023, 2023. https://openreview.net/pdf?id=u6Ibs4hTJH
M. J. Amiri, Z. Lai, L. Patel, B. T. Loo, E. Lo, and W. Zhou, “Saguaro: An Edge Computing-enabled Hierarchical Permissioned Blockchain,” In IEEE International Conference on Data Engineering, Apr. 2023. https://arxiv.org/abs/2101.08819
Z. Lai, C. Liu, and E. Lo, “When private blockchain meets deterministic database,” In Proceedings of the ACM on Management of Data, vol. 1, no. 1, pp. 1–28, 2023. https://dl.acm.org/doi/10.1145/3588952
Z. Zhong, J. Cui, E. Lo, Z. Li, J. Sun, J. Jia, “Rebalanced Siamese Contrastive Mining for Long-Tailed Recognition,” arXiv preprint arXiv:2203.11506, Mar. 2022. https://arxiv.org/abs/2203.11506
Z. Lai, C. Han, C. Liu, P. Zhang, E. Lo, and B. Kao, “Finding Interesting Frames in Deep Video Analytics: A Top-K Approach,” in 2021 ACM SIGMOD/PODS International Conference on Management of Data, Xi’an, Shaanxi, China, June 20-25, 2021. https://dl.acm.org/doi/10.1145/3448016.3452786
Z. Chen, A. W. Fu, M. Jiang, E. Lo, and P. Zhang, “P2H: Efficient Distance Querying on Road Networks by Projected Vertex Separators,” In Proceedings of the 2021 International Conference on Management of Data. Association for Computing Machinery, New York, NY, USA, 313–325, 2021. https://dl.acm.org/doi/10.1145/3448016.3459245

Professor Helen MENG

H. Guo, F. Xie, D. Yang, H. Lu, X. Wu, H. Meng, “Addressing Index Collapse of Large-Codebook Speech Tokenizer with Dual-Decoding Product-Quantized Variational Auto-Encoder,” IEEE Spoken Language Technology Workshop (SLT), Macau, 2-5 December 2024. https://ieeexplore.ieee.org/document/10832228
H. Guo, F. Xie, J. Kang, Y. Xiao, X. Wu, H. Meng, “QS-TTS: Towards Semi-Supervised Text-to-Speech Synthesis via Vector-Quantized Self-Supervised Speech Representation Learning,” in IEEE Transactions on Audio, Speech and Language Processing, vol. 33, pp. 3307-3319, 2025. https://ieeexplore.ieee.org/document/10559828
H. Guo, F. Xie, K. Xie, D. Yang, D. Guo, X. Wu, H. Meng, “SoCodec: A Semantic-Ordered Multi-Stream Speech Codec for Efficient Language Model Based Text-to-Speech Synthesis,” 2024 IEEE Spoken Language Technology Workshop (SLT), Macau, 2-5 December 2024. https://ieeexplore.ieee.org/document/10832247
H. Guo, F. Xie, D. Yang, X. Wu, H. Meng, “Speaking from Coarse to Fine: Improving Neural Codec Language Model via Multi-Scale Speech Coding and Generation,” ICASSP 2025 – 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 2025, pp. 1-5. https://ieeexplore.ieee.org/document/10889794
J. Zhou, M. Hu, J. Li, X. Zhang, X. Wu, I. King, H. Meng, “Rethinking Machine Ethics – Can LLMs Perform Moral Reasoning through the Lens of Moral Theories?” In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2227–2242, Mexico City, Mexico. Association for Computational Linguistics. https://aclanthology.org/2024.findings-naacl.144/
D. Yang, H. Guo, Y. Wang, R. Huang, X. Li, X. Tan, X. Wu, H. Meng, “UniAudio 1.5: large language model-driven audio codec is a few-shot audio task learner,” In Proceedings of the 38th International Conference on Neural Information Processing Systems (NIPS ’24), Vol. 37. Curran Associates Inc., Red Hook, NY, USA, Article 1809, 56802–56827. https://proceedings.neurips.cc/paper_files/paper/2024/hash/6801fa3fd290229efc490ee0cf1c5687-Abstract-Conference.html
Y. Wang, H. Chen, D. Yang, W. Li, D. Luo, G. Li, S. Yang, Z. Wu, H. Meng, X. Wu, “UniSep: Universal Target Audio Separation with Language Models at Scale,” IEEE ICME 2025, 2025. https://arxiv.org/abs/2503.23762
X. Chen, D. Yang, D. Wang, X. Wu, Z. Wu, H. Meng, “CoLM-DSR: Leveraging Neural Codec Language Modeling for Multi-Modal Dysarthric Speech Reconstruction,” Proc. Interspeech 2024, 4129-4133. https://www.isca-archive.org/interspeech_2024/chen24t_interspeech.html
X. Feng, X. Wu, H. Meng, “Ontology-grounded Automatic Knowledge Graph Construction by LLM under Wikidata schema,” KDD Workshop on Human-Interpretable AI 2024, Barcelona, Spain, August 26, 2024. https://ceur-ws.org/Vol-3841/Paper19.pdf
D. Yang, D. Wang, H. Guo, X. Chen, X. Wu, H. Meng, “SimpleSpeech: Towards Simple and Efficient Text-to-Speech with Scalar Latent Transformer Diffusion Models,” Proc. Interspeech 2024, 4398-4402. https://www.isca-archive.org/interspeech_2024/yang24l_interspeech.html
M. Wu, J. Xu, X. Chen, H. Meng, “Integrating Potential Pronunciations for Enhanced Mispronunciation Detection and Diagnosis Ability in LLMs,” ICASSP 2025 – 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 2025, pp. 1-5. https://ieeexplore.ieee.org/abstract/document/10887601
M. Wu, J. Xu, X. Wu, H. Meng, “Prompting Large Language Models with Mispronunciation Detection and Diagnosis Abilities,” Proc. Interspeech 2024, 2990-2994. https://www.isca-archive.org/interspeech_2024/wu24l_interspeech.html#
Y. Wang, X. Wu, D. Wang, L. Meng and H. Meng, “UNIT-DSR: Dysarthric Speech Reconstruction System Using Speech Unit Normalization,” ICASSP 2024 – 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, Republic of, 2024, pp. 12306-12310. https://ieeexplore.ieee.org/document/10446921
T. Zhang, J. Ge, H. Luo, Y. Chuang, M. Gao, Y. Gong, X. Wu, Y. Kim, H. Meng, J. Glass, “Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning,” In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4131–4155, Mexico City, Mexico. Association for Computational Linguistics. https://aclanthology.org/2024.findings-naacl.259/
T. Zhang, K. Li, H. Luo, X. Wu, J. Glass, H. Meng, “Adaptive Query Rewriting: Aligning Rewriters through Marginal Probability of Conversational Answers,” In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 13444–13461, Miami, Florida, USA. Association for Computational Linguistics. https://aclanthology.org/2024.emnlp-main.746/
K. Li, T. Zhang, X. Wu, H. Luo, J. Glass, H. Meng, “Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains,” In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 24349–24364, Vienna, Austria. Association for Computational Linguistics. https://aclanthology.org/2025.acl-long.1186/
K. Li, T. Zhang, Y. Li, H. Luo, A. Moustafa, X. Wu, J. Glass, H. Meng, “Generate, Discriminate, Evolve: Enhancing Context Faithfulness via Fine-Grained Sentence-Level Self-Evolution,” In Findings of the Association for Computational Linguistics: ACL 2025, pages 17091–17105, Vienna, Austria. Association for Computational Linguistics. https://aclanthology.org/2025.findings-acl.878/
J. Wu, J. Lian, D. Wang, H. Meng, “SocialCC: Interactive Evaluation for Cultural Competence in Language Agents,” In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33242–33271, Vienna, Austria. Association for Computational Linguistics. https://aclanthology.org/2025.acl-long.1594/
X. Zhang, B. Peng, Y. Tian, J. Zhou, L. Jin, L. Song, H. Mi, H. Meng, “Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation,” In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946–1965, Bangkok, Thailand. Association for Computational Linguistics. https://aclanthology.org/2024.acl-long.107/
H. Lu, X. Wu, H. Guo, S. Liu, Z. Wu, H. Meng, “Unifying One-Shot Voice Conversion and Cloning with Disentangled Speech Representations,” In ICASSP 2024 – 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 11141-11145, 2024. https://ieeexplore.ieee.org/document/10446296
X. Chen, Y. Wang, X. Wu, D. Wang, Z. Wu, X. Liu, H. Meng, “Exploiting Audio-Visual Features with Pretrained AV-HuBERT for Multi-Modal Dysarthric Speech Reconstruction,” In ICASSP 2024 – 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 12341-12345, 2024. https://www1.se.cuhk.edu.hk/~hccl/publications/pub/Exploiting_Audio-Visual_Features_with_Pretrained_AV-HuBERT_for_Multi-Modal_Dysarthric_Speech_Reconstruction.pdf
X. Chen, X. Wang, S. Zhang, L. He, Z. Wu, X. Wu, H. Meng, “Stylespeech: Self-Supervised Style Enhancing with VQ-VAE-Based Pre-Training for Expressive Audiobook Speech Synthesis,” In ICASSP 2024 – 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 12316-12320, 2024. https://www1.se.cuhk.edu.hk/~hccl/publications/pub/Stylespeech_Self-Supervised_Style_Enhancing_with_VQ-VAE-Based_Pre-Training_for_Expressive_Audiobook_Speech_Synthesis.pdf
H. Lu, X. Wu, Z. Wu, H. Meng, “Speech TripleNet: End-to-End Disentangled Speech Representation Learning for Content, Timbre and Prosody,” In MM ’23: the 31st ACM International Conference on Multimedia, pp. 2829-2837, 2023. https://www1.se.cuhk.edu.hk/~hccl/publications/pub/mmfp3442-lu-CC-BY.pdf
X. Zhang, B. Peng, K. Li, J. Zhou, H. Meng, “SGP-TOD: Building Task Bots Effortlessly via Schema-Guided LLM Prompting,” In Findings of the Association for Computational Linguistics: EMNLP 2023 (EMNLP Findings 2023), pp. 13348-13369, 2023. https://openreview.net/pdf?id=9s7QooDInQ
H. Luo, T. Zhang, Y. S. Chuang, Y. Gong, Y. Kim, X. Wu, H. Meng, J. Glass, “Search Augmented Instruction Learning,” In Findings of the Association for Computational Linguistics: EMNLP 2023 (EMNLP Findings 2023), pp. 3717-3729, 2023. https://openreview.net/pdf?id=noIvPGG8P1
Y. Li, P. Liu, X. Wu, H. Meng, “PunCantonese: A Benchmark Corpus for Low-Resource Cantonese Punctuation Restoration from Speech Transcripts,” In Interspeech 2023, pp. 2183-2187, 2023. https://www.isca-archive.org/interspeech_2023/li23z_interspeech.pdf
T. Zhang, L. Tang, W. Fang, H. Luo, X. Wu, H. Meng and J. Glass, “ConvRGX: Recognition, Generation, and Extraction for Self-trained Conversational Question Answering,” In DialDoc 2023, 2023. https://openreview.net/pdf?id=j9HGFxvx3Co
H. Guo, F. Xie, X. Wu, F. K. Soong and H. Meng, “MSMC-TTS: Multi-Stage Multi-Codebook VQ-VAE Based Neural TTS,” in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 1811-1824, 2023. https://ieeexplore.ieee.org/document/10114504
H. Lu, W. Lam, H. Cheng, and H. Meng, “On controlling fallback responses for grounded dialogue generation,” In Findings of the Association for Computational Linguistics: ACL 2022, pages 2591–2601, Dublin, Ireland. Association for Computational Linguistics, May 2022. https://aclanthology.org/2022.findings-acl.204/
K. Li, T. Zhang, L. Tang, J. Li, H. Lu, X. Wu, and H. Meng, “Grounded dialogue generation with cross-encoding re-ranker, grounding span prediction, and passage dropout,” In Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering, pages 123–129, Dublin, Ireland. Association for Computational Linguistics, May 2022. https://aclanthology.org/2022.dialdoc-1.13/
H. Guo, H. Lu, X. Wu, and H. Meng, “A multi-scale time-frequency spectrogram discriminator for GAN-based non-autoregressive TTS,” In Interspeech 2022, 1566-1570, Sep. 2022. https://www.isca-speech.org/archive/interspeech_2022/guo22_interspeech.html
Y. Deng, W. Zhang, W. Lam, H. Cheng, H. Meng, “User Satisfaction Estimation with Sequential Dialogue Act Modeling in Goal-oriented Conversational Systems,” In Proceedings of the ACM Web Conference 2022 (WWW ’22), 2998-3008, 2022. https://dl.acm.org/doi/10.1145/3485447.3512020
D. Wang, S. Liu, X. Wu, H. Lu, L. Sun, X. Liu, H. Meng, “Speaker Identity Preservation in Dysarthric Speech Reconstruction by Adversarial Speaker Adaptation,” In ICASSP 2022 – 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6677-6681, IEEE, 2022. https://ieeexplore.ieee.org/document/9746680
D. Wang, S. Yang, D. Su, X. Liu, D. Yu, H. Meng, “VCVTS: Multi-speaker Video-to-Speech Synthesis via Cross-modal Knowledge Transfer from Voice Conversion,” In ICASSP 2022 – 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7252-7256, IEEE, 2022. https://ieeexplore.ieee.org/document/9747427
H. Wu, P. Hsu, J. Gao, S. Zhang, S. Huang, J. Kang, Z. Wu, H. Meng, H. Lee, “Adversarial Sample Detection for Speaker Verification by Neural Vocoders,” In ICASSP 2022 – 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 236-240, doi: 10.1109/ICASSP43922.2022.9746900, 2022. https://ieeexplore.ieee.org/document/9746900
H. Wu, H. Kuo, N. Zheng, K. Hung, H. Lee, Y. Tsao, H. Wang, H. Meng, “Partially Fake Audio Detection by Self-Attention-Based Fake Span Discovery,” In ICASSP 2022 – 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 9236-9240, doi: 10.1109/ICASSP43922.2022.9746162, 2022. https://ieeexplore.ieee.org/document/9746162
H. Wu, B. Zheng, X. Li, X. Wu, H. Y. Lee and H. Meng, “Characterizing the Adversarial Vulnerability of Speech Self-Supervised Learning,” In ICASSP 2022 – 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3164-3168, doi: 10.1109/ICASSP43922.2022.9747242, 2022. https://ieeexplore.ieee.org/document/9747242
X. Wu, S. Hu, Z. Wu, X. Liu, H. Meng, “Neural Architecture Search for Speech Emotion Recognition,” In ICASSP 2022 – 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6902-6906, doi: 10.1109/ICASSP43922.2022.9746155, 2022. https://ieeexplore.ieee.org/document/9746155
M. Chaudhary, B. Dzodzo, S. Huang, C. H. Lo, M. Lyu, L. Y. Nie, J. Xing, T. Zhang, X. Zhang, J. Zhou, H. Cheng, W. Lam, and H. Meng, “Unstructured Knowledge Access in Task-oriented Dialog Modeling using Language Inference, Knowledge Retrieval and Knowledge-Integrative Response Generation,” in 35th AAAI Conference on Artificial Intelligence 2021 Dialog System Technology Challenge Workshop (DSTC9), February 8-9, 2021. https://arxiv.org/abs/2101.06066
H. Lu, Z. Wu, X. Wu, X. Li, S. Kang, X. Liu, H. Meng, “VAENAR-TTS: Variational Auto-Encoder Based Non-AutoRegressive Text-to-Speech Synthesis,” In Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czechia, 30 August – 3 September 2021, 3775–3779, 2021. https://www.isca-speech.org/archive/interspeech_2021/lu21d_interspeech.html
M. Wu, K. Li, W. K. Leung, and H. Meng, “Transformer Based End-to-End Mispronunciation Detection and Diagnosis,” In Proc. Interspeech 2021, 3954–3958, 2021. https://www.isca-speech.org/archive/interspeech_2021/wu21h_interspeech.html

Professor Yeung YAM

X. Chen, C. Dai, B. Sun, C. P. Lam, G. Fang, Y. Yam, C. C. L. Wang, “Weaving-Based Freeform Sensing Interface: Design and Fabrication,” ICRA 2024 Workshop, 2024. https://drive.google.com/file/d/1mQ8s6X5045fU0Sf5h3pFZbKb6KSTZUH2/view
Z. Liu, E. L. Doubrovski, J. M. P. Geraedts, W. Wang, Y. Yam, and C. C. L. Wang, “Photogrammetric Reconstruction of a Stolen Statue,” 2023. https://diglib.eg.org/xmlui/handle/10.2312/egs20231000

Professor Liwei WANG

Z. Hu, Y. Zhong, S. Huang, M. Lyu, L. Wang, “Enhancing Temporal Modeling of Video LLMs via Time Gating,” In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 2845–2856, Miami, Florida, USA. Association for Computational Linguistics. https://aclanthology.org/2024.findings-emnlp.162/
S. Liang, Y. Zhong, Z. Hu, Y. Tao, L. Wang, “Fine-grained Spatiotemporal Grounding on Egocentric Videos,” in ICCV 2025, Hawaii, USA, Oct. 2025. https://arxiv.org/abs/2508.00518
Y. Li, S. Liang, M. R. Lyu, L. Wang, “Making Long-Context Language Models Better Multi-Hop Reasoners,” In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), Vol. 1: Long Papers, pp. 2462-2475, 2024. https://aclanthology.org/2024.acl-long.135.pdf
D. Zheng, S. Huang, L. Zhao, Y. Zhong, L. Wang, “Towards Learning a Generalist Model for Embodied Navigation,” CVPR 2024, pp. 13624-13634, 2024. https://openaccess.thecvf.com/content/CVPR2024/papers/Zheng_Towards_Learning_a_Generalist_Model_for_Embodied_Navigation_CVPR_2024_paper.pdf
S. Huang, J. Zhao, Y. Li, L. Wang, “Learning Preference Model for LLMs via Automatic Preference Data Generation,” In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), pp. 9187-9199, 2023. https://aclanthology.org/2023.emnlp-main.570.pdf
Y. Li, J. Zhao, D. Zheng, Z. Y. Hu, Z. Chen, X. Su, Y. Huang, S. Huang, D. Lin, M. R. Lyu, L. Wang, “CLEVA: Chinese Language Models EVAluation Platform,” In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 186-217, 2023. https://aclanthology.org/2023.emnlp-demo.17.pdf
Y. Huang, Y. Li, Y. Xu, L. Zhang, R. Gan, J. Zhang, L. Wang, “MVP-Tuning: Multi-View Knowledge Retrieval with Prompt Tuning for Commonsense Reasoning,” In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol. 1: Long Papers, pp. 13417-13432, 2023. https://aclanthology.org/2023.acl-long.750.pdf
Z. Y. Hu, Y. Li, M. R. Lyu, L. Wang, “VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control,” ICCV 2023, 2023. https://openaccess.thecvf.com/content/ICCV2023/papers/Hu_VL-PET_Vision-and-Language_Parameter-Efficient_Tuning_via_Granularity_Control_ICCV_2023_paper.pdf
Y. Li, J. Zhao, M. R. Lyu, and L. Wang, “Eliciting Knowledge from Large Pre-Trained Models for Unsupervised Knowledge-Grounded Conversation,” In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10551–10564, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics, 2022. https://aclanthology.org/2022.emnlp-main.721/

Professor Guoliang XING

S. Shi, N. Ling, Z. Jiang, X. Huang, Y. He, X. Zhao, B. Yang, C. Bian, J. Xia, Z. Yan, R. W. Yeung, G. Xing, “Soar: Design and Deployment of A Smart Roadside Infrastructure System for Autonomous Driving,” In Proceedings of the 30th Annual International Conference on Mobile Computing and Networking (ACM MobiCom ’24). Association for Computing Machinery, New York, NY, USA, 139–154. https://dl.acm.org/doi/10.1145/3636534.3649352
J. Cui, Y. He, J. Niu, Z. Ouyang, G. Xing, “ΑLiDAR: An Adaptive High-Resolution Panoramic LiDAR System,” In Proceedings of the 30th Annual International Conference on Mobile Computing and Networking (ACM MobiCom ’24). Association for Computing Machinery, New York, NY, USA, 1515–1529. https://dl.acm.org/doi/10.1145/3636534.3690708
J. Cui, S. Shi, Y. He, J. Niu, G. Xing, Z. Ouyang, “VILAM: Infrastructure-assisted 3D Visual Localization and Mapping for Autonomous Driving,” In Proceedings of the 21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24), pp. 1831-1845, 2024. https://www.usenix.org/system/files/nsdi24-cui.pdf
4326-4336, 2024<\/a><\/li><\/ul><p>\u00a0<\/p><ul><li><a href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3495243.3560539\">S. Shi, J. Cui, Z. Jiang, Z. Yan, G. Xing, J. Niu, and Z. Ouyang, \u201cVIPS: Real-Time Perception Fusion for Infrastructure-Assisted Autonomous Driving,\u201d In Proceedings of the 28th Annual International Conference on Mobile Computing And Networking, Oct. 2022. <\/a><\/li><\/ul><div>\u00a0<\/div><ul><li><a href=\"https:\/\/ieeexplore.ieee.org\/document\/9825928\" target=\"_blank\" rel=\"noopener\">X. Shuai, Y. Shen, S. Jiang, Z. Zhao, W. Lan, G. Xing. \u201cBalanceFL: Addressing Class Imbalance in Long-tail Federated Learning,\u201d Accepted by ACM \/ IEEE International Conference on Information Processing in Sensor Networks (IPSN), 2022.<\/a><\/li><\/ul><p>\u00a0<\/p><ul><li class=\"p1\"><a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3450268.3453532\" target=\"_blank\" rel=\"noopener\">X. Shuai, Y. Shen, Y. Tang, S. Shi, L. Ji, and G. Xing, \u201cmilliEye: A Lightweight mmWave Radar and Camera Fusion System for Robust Object Detection,\u201d in The ACM\/IEEE International Conference on Internet of Things Design and Implementation (IoTDI), May 18-21, 2021.<\/a><\/li><\/ul><div>\u00a0<\/div><ul><li class=\"p1\"><a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3450268.3453520\" target=\"_blank\" rel=\"noopener\">Z. Zhao, K. Wang, N. Ling, and G. Xing, \u201cEdgeML: An AutoML Framework for Real-Time Deep Learning on the Edge,\u201d in The ACM\/IEEE International Conference on Internet of Things Design and Implementation (IoTDI), May 18-21, 2021.<\/a><\/li><\/ul>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-fe6fa89 elementor-section-boxed elementor-section-height-default elementor-section-height-default wpr-particle-no wpr-jarallax-no wpr-parallax-no wpr-sticky-section-no wpr-equal-height-no\" data-id=\"fe6fa89\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element elementor-element-d3eb7d4\" data-id=\"d3eb7d4\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-03b8ba5 elementor-widget elementor-widget-heading\" data-id=\"03b8ba5\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Professor Bolei ZHOU<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element elementor-element-e6ccde6\" data-id=\"e6ccde6\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-17ff29e elementor-widget elementor-widget-text-editor\" data-id=\"17ff29e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<ul><li class=\"p1\"><a href=\"https:\/\/ieeexplore.ieee.org\/document\/9577819\" target=\"_blank\" 
rel=\"noopener\">Y. Liu, J. Zhang, L. Fang, Q. Jiang, and B. Zhou, \u201cMultimodal Motion Prediction with Stacked Transformers,\u201d in IEEE\/ CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 19-25, 2021.<\/a><\/li><\/ul><div>\u00a0<\/div><ul><li class=\"p1\"><a href=\"https:\/\/ieeexplore.ieee.org\/document\/9577381\" target=\"_blank\" rel=\"noopener\">C. Yang, Z. Wu, B. Zhou, and S. Lin, \u201cInstance Localization for Self-supervised Detection Pretraining,\u201d in IEEE\/ CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 19-25, 2021.<\/a><\/li><\/ul><div>\u00a0<\/div><ul><li class=\"p1\"><a href=\"https:\/\/ieeexplore.ieee.org\/document\/9361118\" target=\"_blank\" rel=\"noopener\">J. Sun, L. Yu, P. Dong, B. Lu, and B. Zhou, \u201cAdversarial Inverse Reinforcement Learning with Self-attention Dynamics Model,\u201d IEEE Robot. Autom. Lett., vol. 6, no. 2, pp. 1880-1886, 2021.<\/a><\/li><\/ul><div>\u00a0<\/div><ul><li class=\"p1\"><a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17172\" target=\"_blank\" rel=\"noopener\">J. Sun, L. Yu, P. Dong, B. Lu, and B. Zhou, \u201cHiABP: Hierarchical Initialized ABP for Unsupervised Representation Learning,\u201d in 35th AAAI Conference on Artificial Intelligence, February 2-9, 2021.<\/a><\/li><\/ul><div>\u00a0<\/div><ul><li class=\"p1\"><a href=\"https:\/\/ieeexplore.ieee.org\/document\/9578007\" target=\"_blank\" rel=\"noopener\">Y. Shen and B. Zhou, \u201cClosed-Form Factorization of Latent Semantics in GANS,\u201d in IEEE\/ CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 19-25, 2021.<\/a><\/li><\/ul><div>\u00a0<\/div><ul><li class=\"p1\"><a href=\"https:\/\/ieeexplore.ieee.org\/document\/9578038\" target=\"_blank\" rel=\"noopener\">Y. Xu, Y. Shen, J. Zhu, C. Yang, and B. Zhou, \u201cGenerative Hierarchical Features from Synthesizing Images,\u201d in IEEE\/ CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 19-25, 2021.<\/a><\/li><\/ul><div>\u00a0<\/div><ul><li class=\"p1\"><a href=\"https:\/\/proceedings.mlr.press\/v164\/peng22a.html\" target=\"_blank\" rel=\"noopener\">Z. Peng, Q. Li and C. Liu, and B. Zhou. \u201cSafe Driving via Expert Guided Policy Optimization,\u201d In 5th Annual Conference on Robot Learning (CoRL), 2021.<\/a><\/li><\/ul><div>\u00a0<\/div><ul><li class=\"p1\"><a href=\"https:\/\/iclr.cc\/virtual\/2022\/poster\/6470\" target=\"_blank\" rel=\"noopener\">Q. Li, Z. Peng, B. Zhou. \u201cEfficient Learning of Safe Driving Policy via Human-AI Copilot Optimization,\u201d In International Conference on Learning Representations (ICLR), 2022.<\/a><\/li><\/ul><div>\u00a0<\/div><ul><li class=\"p1\"><a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/hash\/594ca7adb3277c51a998252e2d4c906e-Abstract.html\" target=\"_blank\" rel=\"noopener\">Z. Peng, Q. Li, K.M. Hui, C. Liu, B. Zhou. \u201cLearning to Simulate Self-driven Particles System with Coordinated Policy Optimization,\u201d In Advances in Neural Information Processing Systems (NeurIPS), 2021.<\/a><\/li><\/ul><div>\u00a0<\/div><ul><li class=\"p1\"><a href=\"https:\/\/ieeexplore.ieee.org\/document\/9829243\/authors#authors\" target=\"_blank\" rel=\"noopener\">Q. Li, Z. Peng, L. Feng, Q. Zhang, Z. Xue and B. Zhou, &#8220;MetaDrive: Composing Diverse Driving Scenarios for Generalizable Reinforcement Learning,&#8221; in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 3, pp. 
- X. Liu, Q. Wu, H. Zhou, Y. Xu, R. Qian, X. Lin, X. Zhou, W. Wu, B. Dai, B. Zhou, "Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. https://ieeexplore.ieee.org/document/9879694
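Since this list is published as an ordinary WordPress page, it can also be pulled programmatically through the site's REST API (the page's own metadata records https://cpii.hk/zh/wp-json/wp/v2/pages/970 as its canonical endpoint). The following Python sketch is an illustration only, not an official client: it assumes the third-party `requests` and `beautifulsoup4` packages, fetches the page JSON, and prints each professor's name with the publication links beneath it.

```python
# Hedged sketch: fetch this publications page via the WordPress REST API
# and list each professor's heading with the linked publications.
# Assumptions (not part of the site): the `requests` and `beautifulsoup4`
# packages are installed, and the page keeps its current <h2>/<ul> layout.
import requests
from bs4 import BeautifulSoup

ENDPOINT = "https://cpii.hk/zh/wp-json/wp/v2/pages/970"  # the page's own REST URL

resp = requests.get(ENDPOINT, timeout=30)
resp.raise_for_status()
html = resp.json()["content"]["rendered"]  # rendered page body as HTML

soup = BeautifulSoup(html, "html.parser")
# Each professor's name is an <h2>; the publication links live in the
# <ul> lists inside the same <section> element.
for heading in soup.find_all("h2"):
    print(heading.get_text(strip=True))
    section = heading.find_parent("section")
    if section is None:
        continue
    for link in section.find_all("a", href=True):
        title = link.get_text(" ", strip=True)
        print("  -", title[:100], "->", link["href"])
```

A note on the design: parsing `content.rendered` keeps the script independent of the theme's Elementor wrapper classes, which change between site revisions; only the heading/list structure is relied upon.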