Conference Information
FLLM 2024: International Conference on Foundation and Large Language Models
https://fllm2024.fllm-conference.org/index.php
Submission Date: 2024-06-30
Notification Date: 2024-09-15
Conference Date: 2024-11-26
Location: Dubai, UAE
Years: 2

Call For Papers
With the emergence of foundation models (FMs) and large language models (LLMs) that are trained on large amounts of data at scale and are adaptable to a wide range of downstream applications, artificial intelligence is experiencing a paradigm shift. BERT, T5, ChatGPT, GPT-4, Falcon 180B, Codex, DALL-E, Whisper, and CLIP now serve as the foundation for new applications ranging from computer vision to protein sequence analysis, and from speech recognition to coding. Earlier models typically had to be built from scratch for each new task. The capacity to experiment with, examine, and understand the capabilities and potential of next-generation FMs is critical to undertaking this research and guiding its path. Nevertheless, these models are currently largely inaccessible: the resources required to train them are highly concentrated in industry, and even the assets (data, code) required to replicate their training are frequently not released due to their high demand in industry. At the moment, mostly large technology companies such as OpenAI, Google, Facebook, and Baidu can afford to construct FMs and LLMs. Despite the widely publicized use of FMs and LLMs, we still lack a comprehensive understanding of how they operate, why they underperform, and what they are even capable of, because of their emergent qualities. To address these problems, we believe that much of the critical research on FMs and LLMs will necessitate extensive multidisciplinary collaboration, given their fundamentally social and technical structure.

The International Conference on Foundation and Large Language Models (FLLM) addresses the architectures, applications, challenges, approaches, and future directions of these models. We invite the submission of original papers on all topics related to FMs and LLMs, with special interest in, but not limited to, the following:

    Architectures and Systems
        Transformers and Attention
        Bidirectional Encoding
        Autoregressive Models
        Massive GPU Systems
        Prompt Engineering
        Multimodal LLMs
        Fine-tuning
    Challenges
        Hallucination
        Cost of Creation and Training
        Energy and Sustainability Issues
        Integration
        Safety and Trustworthiness
        Interpretability
        Fairness
        Social Impact
    Future Directions
        Generative AI
        Explainability and Explainable AI
        Retrieval Augmented Generation (RAG)
        Federated Learning for FLLM
        Large Language Models Fine-Tuning on Graphs
        Data Augmentation
    Natural Language Processing Applications
        Generation
        Summarization
        Rewriting
        Search
        Question Answering
        Language Comprehension and Complex Reasoning
        Clustering and Classification
    Applications
        Natural Language Processing
        Communication Systems
        Security and Privacy
        Image Processing and Computer Vision
        Life Sciences
        Financial Systems