Measuring and Improving the Efficiency of Python Code Generated by LLMs Using CoT Prompting and Fine-Tuning
The burgeoning sophistication of Artificial Intelligence (AI) has catalyzed the rapid proliferation of Large Language Models (LLMs) within software development. These models are increasingly employed to automate the generation of functionally correct code, address complex computational problems, and...
Main Authors: Ramya Jonnala, Jeong Yang, Young Lee, Gongbo Liang, Zechun Cao
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11069268/
Similar Items
- Meticulous Thought Defender: Fine-Grained Chain-of-Thought (CoT) for Detecting Prompt Injection Attacks of Large Language Models
  by: Lijuan Shi, et al.
  Published: (2025-01-01)
- Fine-tuning or prompting on LLMs: evaluating knowledge graph construction task
  by: Hussam Ghanem, et al.
  Published: (2025-06-01)
- Medical LLMs: Fine-Tuning vs. Retrieval-Augmented Generation
  by: Bhagyajit Pingua, et al.
  Published: (2025-06-01)
- LLMs on a Budget: System-Level Approaches to Power-Efficient and Scalable Fine-Tuning
  by: Kailash Gogineni, et al.
  Published: (2025-01-01)
- Using COTS Technologies to Restore the Operability of and Modernize Units and Assemblies of Analog Radar Stations of the Radio Engineering Troops
  by: М. Р. Арасланов, et al.
  Published: (2023-08-01)