Authors :
Puram Anjalidevi; Pulipati Bharath Chandra Seshu; Torlapati Dileep Chakravarthi; Gaddam Mounika
Volume/Issue :
Volume 10 - 2025, Issue 4 - April
Google Scholar :
https://tinyurl.com/pn8tth7s
Scribd :
https://tinyurl.com/56bxynpf
DOI :
https://doi.org/10.38124/ijisrt/25apr1710
Abstract :
Fixture layout planning is critical for securely holding components during production processes. An optimal
fixture arrangement minimizes surface deformation and prevents crack propagation, thereby preserving the structural
integrity of components. Traditionally handled by engineers, fixture planning has grown too complex for manual methods
alone. Conventional optimization methods often become trapped in local optima, limiting their effectiveness, and while
machine learning offers improvements, it demands costly labeled data. This paper proposes a multi-agent reinforcement
learning framework grounded in team decision theory. The approach enables agents to learn collaboratively, improving
fixture planning without heavy reliance on data, by simulating fixture placement on a flexible surface to minimize
deformation under uniform pressure. Multiple agents select fixture pairs, with deformation estimated using plate
bending theory. The resulting environment supports reinforcement learning and highlights the benefits of strategic,
informed placements.
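The setup described above — agents jointly placing fixtures on a pressurized flexible surface and sharing a deformation-based objective — can be sketched as a toy cooperative environment. This is not the authors' implementation: the grid size, agent count, and especially the deformation proxy (squared distance to the nearest fixture, standing in for the paper's plate bending model) are illustrative assumptions.

```python
import numpy as np

GRID = 16          # plate discretized as a GRID x GRID grid (assumed size)
N_AGENTS = 3       # number of fixture-placing agents (assumed)
PRESSURE = 1.0     # uniform pressure magnitude (arbitrary units)

def deformation(fixtures):
    """Max 'deformation' over the plate for a set of fixture cells.

    Heuristic proxy: deflection at a cell grows with the squared distance
    to its nearest fixture, scaled by the uniform pressure. A faithful
    version would solve the plate bending equations instead.
    """
    ys, xs = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
    d2 = np.min([(ys - fy) ** 2 + (xs - fx) ** 2 for fy, fx in fixtures], axis=0)
    return PRESSURE * d2.max()

def team_reward(fixtures):
    # Team-decision flavor: every agent receives the same shared signal.
    return -deformation(fixtures)

rng = np.random.default_rng(0)

# One sweep of a naive cooperative improvement scheme: each agent in turn
# greedily re-places its own fixture while the others stay fixed, all
# agents optimizing the shared team reward.
fixtures = [tuple(rng.integers(0, GRID, size=2)) for _ in range(N_AGENTS)]
for i in range(N_AGENTS):
    candidates = [(y, x) for y in range(GRID) for x in range(GRID)]
    best = max(candidates,
               key=lambda c: team_reward(fixtures[:i] + [c] + fixtures[i + 1:]))
    fixtures[i] = best

print("fixtures:", fixtures, "max deformation:", deformation(fixtures))
```

In the paper's framework the greedy sweep would be replaced by learned policies, but the environment interface — joint fixture choices in, shared deformation-based reward out — is the same shape.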
References :
- W. Park, J. Park, H. Kim, N. Kim, and D. Y. Kim, “Assembly part positioning on transformable pin array fixture by active pin maximization and joining point alignment,” IEEE Trans. Autom. Sci. Eng., vol. 19, no. 2, pp. 1047–1057, Apr. 2022.
- I. Boyle, Y. Rong, and D. C. Brown, “A review and analysis of current computer-aided fixture design approaches,” Robot. Comput. Integr. Manuf., vol. 27, no. 1, pp. 1–12, Feb. 2011.
- S. Gronauer and K. Diepold, “Multi-agent deep reinforcement learning: A survey,” Artif. Intell. Rev., vol. 55, no. 2, pp. 895–943, 2022.
- J. H. van Schuppen and T. Villa, Eds., Coordination Control of Distributed Systems (Lecture Notes in Control and Information Sciences), vol. 456. Cham, Switzerland: Springer, 2015.
- F. Liqing and A. S. Kumar, “XML-based representation in a CBR system for fixture design,” Comput.-Aided Des. Appl., vol. 2, nos. 1–4, pp. 339–348, Jan. 2005.
- C. Luo, X. Wang, C. Su, and Z. Ni, “A fixture design retrieving method based on constrained maximum common subgraph,” IEEE Trans. Autom. Sci. Eng., vol. 15, no. 2, pp. 692–704, Apr. 2018.
- L. Xiong, R. Molfino, and M. Zoppi, “Fixture layout optimization for flexible aerospace parts based on self-reconfigurable swarm intelligent fixture system,” Int. J. Adv. Manuf. Technol., vol. 66, nos. 9–12, pp. 1305–1313, Jun. 2013.
- S. Wang, Z. Jia, X. Lu, H. Zhang, C. Zhang, and S. Y. Liang, “Simultaneous optimization of fixture and cutting parameters of thin-walled workpieces based on particle swarm optimization algorithm,” Simulation, vol. 94, no. 1, pp. 67–76, Jan. 2018.
- Q. Feng, W. Maier, T. Stehle, and H.-C. Möhring, “Optimization of a clamping concept based on machine learning,” Prod. Eng., vol. 16, no. 1, pp. 9–22, Aug. 2021.
- C. Cronrath, A. R. Aderiani, and B. Lennartson, “Enhancing digital twins through reinforcement learning,” in Proc. IEEE 15th Int. Conf. Autom. Sci. Eng. (CASE). Washington, DC, USA: IEEE Computer Society, Aug. 2019, pp. 293–298.
- R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning Series). Cambridge, MA, USA: MIT Press, 2018.
- C. Yu, A. Velu, E. Vinitsky, Y. Wang, A. Bayen, and Y. Wu, “The surprising effectiveness of PPO in cooperative, multi-agent games,” in Proc. 36th Conf. Neural Inf. Process. Syst. New Orleans, LA, USA: Neural Information Processing Systems Foundation, Mar. 2022, p. 29.
- S. Lu, Y. Hu, and L. Zhang, “Stochastic bandits with graph feedback in non-stationary environments,” in Proc. 35th AAAI Conf. Artif. Intell. (AAAI), vol. 10, 2021, pp. 8758–8766.
- R. Bogenfeld, C. Gorsky, and T. Wille, “An experimental damage tolerance investigation of CFRP composites on a substructural level,” Compos. C, Open Access, vol. 8, Jul. 2022, Art. no. 100267.
- V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, Feb. 2015.
- S. Zamir, “Bayesian games: Games with incomplete information,” in Encyclopedia of Complexity and Systems Science, R. A. Meyers, Ed., New York, NY, USA: Springer, 2009, pp. 426–441.
- S. Veeramani, S. Muthuswamy, K. Sagar, and M. Zoppi, “Artificial intelligence planners for multi-head path planning of SwarmItFIX agents,” J. Intell. Manuf., vol. 31, no. 4, pp. 815–832, Apr. 2020.
- J. Kudela and R. Matousek, “Recent advances and applications of surrogate models for finite element method computations: A review,” Soft Comput., vol. 26, no. 24, pp. 13709–13733, Dec. 2022.