The UTC Graduate School is pleased to announce that Rajon Dey will present Master’s research titled “Fine-Tuning a Domain-Specific Language Model for Truss Structural Analysis” on 03/02/2026 from 10:00 am to 11:00 am in ECS 426. Everyone is invited to attend.
Engineering
Chair: Dr. Weidong Wu
Co-Chair: Dr. Joseph Owino
Abstract:
This research investigates the feasibility of fine-tuning a domain-specific large language model (LLM) to enhance the accuracy and accessibility of truss structural analysis. General-purpose AI models, such as ChatGPT, often struggle with engineering-specific problems due to insufficient domain knowledge and specialized training. To address this limitation, we propose a practical approach to fine-tuning an LLM and a vision-language model (VLM) tailored to structural engineering tasks, using truss analysis via the stiffness method as a case study. The project leverages a curated dataset comprising textual and diagrammatic inputs sourced from engineering textbooks, manuals, and solved examples. Truss templates collected from different sources are converted to JSON format and then augmented and visualized using a Python script. The dataset was derived from 27 valid truss templates and extended via geometric augmentation, including mirroring, scaling along the horizontal and vertical axes, and shifting the top-node coordinates; loads and supports were randomized to obtain diverse truss structures. Fine-tuning involves preprocessing domain-specific datasets, adapting pre-trained models using supervised learning, and incorporating physics-based constraints. Llama 3.2 Vision Instruct, a multimodal large language model (MLLM), was fine-tuned using the PEFT approach to generate text-based truss problems from the corresponding truss images. T5, a transformer-based model, was fine-tuned to process the text-based truss problems and produce output in a structured format suitable for analysis via the stiffness method. The models’ accuracy and computational performance were evaluated against ground-truth node coordinates, elements, loads, and support conditions. This research demonstrates the potential of fine-tuned domain-specific LLMs to automate engineering analysis and design workflows, offering engineers and students a practical tool for rapid and accurate structural analysis. Ultimately, given design ideas and conceptual sketches as inputs, a fine-tuned domain-specific large language model can provide highly accurate and detailed analysis along with practical design solutions. By addressing challenges in training/validation dataset preparation, fine-tuning, and evaluation metrics for assessing domain-specific LLM performance, this study advances the practical application of AI in structural engineering.
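
As an illustration of the dataset-construction step described above, the sketch below shows one way a truss template might be stored as JSON and augmented by mirroring, axis scaling, top-node shifting, and load randomization. The field names, the triangular example truss, and the augmentation ranges are hypothetical placeholders; the actual script and schema used in the thesis may differ.

# Minimal sketch of template storage and geometric augmentation (illustrative only)
import json, random

template = json.loads("""
{
  "nodes": [[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]],
  "elements": [[0, 1], [1, 2], [0, 2]],
  "supports": {"0": "pin", "1": "roller"},
  "loads": {"2": [0.0, -10.0]}
}
""")

def augment(truss, sx=1.0, sy=1.0, mirror=False, top_shift=0.0):
    """Scale the axes, optionally mirror about x = 0, shift the top node,
    and randomize the load magnitudes to produce a new training sample."""
    new = json.loads(json.dumps(truss))          # deep copy of the template
    ys = [y for _, y in new["nodes"]]
    top = ys.index(max(ys))                      # highest node gets the shift
    for k, (x, y) in enumerate(new["nodes"]):
        x, y = x * sx, y * sy
        if mirror:
            x = -x
        if k == top:
            x += top_shift
        new["nodes"][k] = [round(x, 3), round(y, 3)]
    new["loads"] = {n: [round(random.uniform(-20, 20), 1),
                        round(random.uniform(-50, -5), 1)]
                    for n in new["loads"]}
    return new

print(json.dumps(augment(template, sx=1.5, mirror=True, top_shift=0.4), indent=2))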
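
The parameter-efficient fine-tuning (PEFT) step could be set up along the following lines, assuming the Hugging Face transformers and peft libraries. The t5-base checkpoint, LoRA rank, and target modules shown here are illustrative defaults, not the configuration used in this work.

# Minimal LoRA/PEFT setup sketch for the text-to-structure model (illustrative only)
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Wrap the frozen base model with low-rank adapters on the attention projections
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q", "v"], task_type="SEQ_2_SEQ_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable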
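
Once a structured truss description (node coordinates, elements, loads, and supports) is available, the downstream analysis follows the standard direct stiffness method. The minimal solver sketch below, using a hypothetical three-node triangular truss, illustrates that step; the material properties, cross-sectional area, and example geometry are placeholders rather than values from the thesis.

# Minimal direct-stiffness (stiffness method) solver for 2D trusses (illustrative only)
import numpy as np

def solve_truss(nodes, elements, loads, supports, E=200e9, A=1e-3):
    """Assemble the global stiffness matrix and return nodal displacements."""
    ndof = 2 * len(nodes)
    K = np.zeros((ndof, ndof))
    for i, j in elements:
        xi, yi = nodes[i]
        xj, yj = nodes[j]
        L = np.hypot(xj - xi, yj - yi)
        c, s = (xj - xi) / L, (yj - yi) / L
        # Element stiffness matrix in global coordinates
        k = (E * A / L) * np.array([[ c*c,  c*s, -c*c, -c*s],
                                    [ c*s,  s*s, -c*s, -s*s],
                                    [-c*c, -c*s,  c*c,  c*s],
                                    [-c*s, -s*s,  c*s,  s*s]])
        dofs = [2*i, 2*i + 1, 2*j, 2*j + 1]
        K[np.ix_(dofs, dofs)] += k
    F = np.zeros(ndof)
    for node, (fx, fy) in loads.items():
        F[2*node], F[2*node + 1] = fx, fy
    fixed = [2*n + d for n, dirs in supports.items() for d in dirs]
    free = [d for d in range(ndof) if d not in fixed]
    u = np.zeros(ndof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
    return u

# Hypothetical triangular truss: pin at node 0, roller at node 1, 10 kN load at the apex
nodes = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
elements = [(0, 1), (1, 2), (0, 2)]
loads = {2: (0.0, -10e3)}
supports = {0: (0, 1), 1: (1,)}    # fixed DOFs per node: 0 = x, 1 = y
print(solve_truss(nodes, elements, loads, supports))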