Fine-Tuning Llama 2 for Automated Ad Caption Generation


Authors : Kirtan Panchal; Shubhangi Tidake

Volume/Issue : Volume 10 - 2025, Issue 7 - July


Google Scholar : https://tinyurl.com/4u782pp5

Scribd : https://tinyurl.com/43y79263

DOI : https://doi.org/10.38124/ijisrt/25jul236



Abstract : Generating engaging and relevant ad captions poses a significant challenge for advertisers. This research addresses this issue by improving Llama 2, an advanced language model, through fine-tuning with a custom dataset created specifically for ad captioning. Techniques such as quantization and matrix decomposition were employed to enhance Llama 2's ability to produce captivating and descriptive ad captions. The primary objective was to streamline and improve the efficiency of the caption creation process. Performance evaluation was conducted via A/B testing, comparing our enhanced Llama 2 against conventional captioning methods. Key performance indicators included click-through rates, user engagement, and actions taken, such as purchases. Experimental results demonstrated that the fine-tuned Llama 2 effectively generates captions that resonate with audiences, encouraging actionable responses. This study advances the capabilities of language models in advertising and provides valuable insights for marketers looking to enhance the impact of their ad campaigns in the digital landscape.

Keywords : Fine-Tuning, Llama2, Natural Language Processing (NLP), Caption Generation, Advertising, Transfer Learning, Quantization, Text Generation.
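The abstract names two efficiency techniques, quantization and matrix decomposition. A minimal numpy sketch of the general idea behind each is shown below: the frozen base weight is stored at low precision (absmax int8 here, purely for illustration), and only two small low-rank factors B and A are trained, so the effective weight is dequant(W_q) + B @ A. The layer size, rank, and bit width are assumptions for the demo, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 1024, 8                       # hypothetical layer size and adapter rank

W = rng.standard_normal((d, d)).astype(np.float32)   # frozen base weight

# --- absmax int8 quantization of the frozen weight ---
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)            # stored at 1 byte/param
W_deq = W_q.astype(np.float32) * scale               # dequantized for compute

# --- low-rank adapter: only A and B would be trained ---
A = rng.standard_normal((r, d)).astype(np.float32) * 0.01
B = np.zeros((d, r), dtype=np.float32)               # zero init: no change at start

W_eff = W_deq + B @ A                                # effective weight

trainable = A.size + B.size
print(f"trainable adapter params: {trainable:,} "
      f"({trainable / W.size:.3%} of the full matrix)")
```

Because B is initialized to zero, the adapted model starts out identical to the quantized base model, and training moves only the small A and B factors rather than the full d x d matrix.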

References :

  1. H. Touvron et al., “Llama 2: Open foundation and fine-tuned chat models,” arXiv preprint arXiv:2307.09288, 2023.
  2. S. Sehgal, J. Sharma, and N. Chaudhary, “Generating image captions based on deep learning and natural language processing,” in 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), 2020, pp. 165–169.
  3. J. Wei et al., “Emergent abilities of large language models,” arXiv preprint arXiv:2206.07682, 2022.
  4. W. Yu et al., “A survey of knowledge-enhanced text generation,” ACM Comput Surv, vol. 54, no. 11s, pp. 1–38, 2022.
  5. M. H. Bakri, “The effectiveness of advertising in digital marketing towards customer satisfaction,” Journal of Technology Management and Technopreneurship (JTMT), vol. 8, no. 1, pp. 72–82, 2020.
  6. C. Jeong, “Fine-tuning and utilization methods of domain-specific LLMs,” arXiv preprint arXiv:2401.02981, 2024.
  7. S. Lermen, C. Rogers-Smith, and J. Ladish, “LoRA fine-tuning efficiently undoes safety training in Llama 2-Chat 70B,” arXiv preprint arXiv:2310.20624, 2023.
  8. T. Dettmers, A. Pagnoni, A. Holtzman, and L. Zettlemoyer, “QLoRA: Efficient finetuning of quantized LLMs,” Adv Neural Inf Process Syst, vol. 36, 2024.
  9. A. Vaswani et al., “Attention is all you need,” Adv Neural Inf Process Syst, vol. 30, 2017.
  10. S. Dathathri et al., “Plug and play language models: A simple approach to controlled text generation,” arXiv preprint arXiv:1912.02164, 2019.

