LLM BENCHMARKING WITH LLAMA2: EVALUATING CODE DEVELOPMENT PERFORMANCE ACROSS MULTIPLE PROGRAMMING LANGUAGES
- Generated code in Section 4 is available here
- Generated code documentation in Section 6 is available here
- Generated unit tests in Section 8 are available here
- Translated code in Section 10 is available here
- Diehl P, Nader N, Moraru M, Brandt SR. LLM benchmarking with LLaMA2: Evaluating code development performance across multiple programming languages. Journal of Machine Learning for Modeling and Computing. 2025;6(3). doi:10.1615/JMachLearnModelComput.2025058957. Preprint.