VISION-LANGUAGE MODELS FOR VISION TASKS: A SURVEY

Most visual recognition studies rely heavily on crowd-labelled data for training deep neural networks (DNNs), and they usually train a separate DNN for each visual recognition task, leading to a laborious and time-consuming recognition paradigm.
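The alternative this survey covers is vision-language models such as CLIP, which score an image against natural-language class descriptions so that one pretrained model can serve many recognition tasks without per-task labels. Below is a minimal, hedged sketch of that zero-shot scoring step in PyTorch; the function name, the `text_encoder` callable, the embedding width, and the temperature value are illustrative assumptions, not details from the survey.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_emb, class_prompts, text_encoder, temperature=0.01):
    """Score one image embedding against text embeddings of class prompts.

    image_emb:     (d,) tensor from a pretrained image encoder (assumed given).
    class_prompts: list of strings such as "a photo of a dog".
    text_encoder:  callable mapping a list of strings to a (num_classes, d) tensor.
    Returns one probability per class; no task-specific training is involved.
    """
    text_emb = text_encoder(class_prompts)            # (C, d)
    image_emb = F.normalize(image_emb, dim=-1)        # unit-norm embeddings so the
    text_emb = F.normalize(text_emb, dim=-1)          # dot product is cosine similarity
    logits = image_emb @ text_emb.T / temperature     # (C,) similarity scores
    return logits.softmax(dim=-1)

# Toy usage with random stand-ins for the pretrained encoders (d = 512 is an assumption).
d = 512
dummy_text_encoder = lambda prompts: torch.randn(len(prompts), d)
probs = zero_shot_classify(torch.randn(d),
                           ["a photo of a cat", "a photo of a dog"],
                           dummy_text_encoder)
```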

FINE-TUNING LANGUAGE MODELS FROM HUMAN PREFERENCES

Most work on reward learning has used simulated environments, but complex information about values is often expressed in natural language, and we believe reward learning for language is a key to making RL practical and safe for real-world tasks.
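Concretely, the reward-learning step fits a scalar reward model to human judgements; a common simplification is pairwise preferences scored with a Bradley-Terry style loss (the paper itself collects comparisons among several samples). The sketch below shows only that loss; the function and argument names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss for reward learning from pairwise human preferences.

    reward_chosen / reward_rejected: (batch,) scalar rewards assigned by the reward
    model to the human-preferred and the non-preferred completion of the same prompt.
    Minimizing the loss pushes preferred completions toward higher reward, giving RL
    a learned reward signal in place of a hand-written one.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: random rewards stand in for a reward model's outputs.
chosen = torch.randn(8, requires_grad=True)
rejected = torch.randn(8, requires_grad=True)
preference_loss(chosen, rejected).backward()   # gradients would flow into the reward model
```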

ALL IN ONE: MULTI-TASK PROMPTING FOR GRAPH NEURAL NETWORKS

Inspired by prompt learning in natural language processing (NLP), which has proven highly effective at leveraging prior knowledge for various NLP tasks, we study prompting for graphs, with the aim of bridging the gap between pre-trained models and the variety of downstream graph tasks.
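A heavily simplified sketch of the idea, assuming PyTorch: a few learnable prompt vectors are mixed into each node's features before a frozen, pre-trained GNN, so only the prompt (and a small head) is tuned per downstream task. The class name, the number of tokens, and the weighting scheme are illustrative assumptions, not the paper's exact prompt-graph construction.

```python
import torch
import torch.nn as nn

class GraphPrompt(nn.Module):
    """Learnable graph prompt: a small set of prompt token vectors whose weighted
    sum is added to every node feature before a frozen, pre-trained GNN encoder."""

    def __init__(self, feat_dim: int, num_tokens: int = 4):
        super().__init__()
        self.tokens = nn.Parameter(0.01 * torch.randn(num_tokens, feat_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, feat_dim) node features of the downstream task graph.
        weights = torch.softmax(x @ self.tokens.T, dim=-1)  # (N, num_tokens) node-token affinities
        return x + weights @ self.tokens                    # prompted node features

# Usage: optimize only the prompt parameters; the pre-trained GNN stays frozen.
prompt = GraphPrompt(feat_dim=64)
prompted_x = prompt(torch.randn(100, 64))   # feed prompted_x to the frozen GNN
```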

VOLUME RENDERING OF NEURAL IMPLICIT SURFACES

Accurate sampling is important to provide a precise coupling of geometry and radiance, and modelling the volume density as a function of the geometry also allows efficient unsupervised disentanglement of shape and appearance in volume rendering.
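As a concrete illustration of how such methods tie geometry to density, here is a hedged sketch of a Laplace-CDF transform in the spirit of the paper's density parameterization, mapping a signed distance field to volume density. The function name, the default values of `alpha` and `beta`, and the sign convention are assumptions made for this sketch.

```python
import torch

def sdf_to_density(sdf: torch.Tensor, alpha: float = 100.0, beta: float = 0.01) -> torch.Tensor:
    """Map signed distances to volume density via a scaled Laplace CDF.

    sdf:   signed distance samples along a ray, assumed negative inside the surface
           and positive outside (sign convention is an assumption of this sketch).
    alpha: overall density scale; beta: width of the transition band at the surface.
    Density approaches alpha inside the shape and decays to zero outside, so geometry
    (the SDF) stays separate from appearance (a radiance field queried independently).
    """
    s = -sdf
    # CDF of a zero-mean Laplace distribution with scale beta, evaluated at s.
    psi = torch.where(s <= 0,
                      0.5 * torch.exp(s / beta),
                      1.0 - 0.5 * torch.exp(-s / beta))
    return alpha * psi

# Toy usage: densities for a few samples straddling the surface (outside, on, inside).
densities = sdf_to_density(torch.tensor([0.05, 0.0, -0.05]))
```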