[1] M.V. Harsha and B.P. Harish, Artificial neural network model for design optimization of 2-stage op-amp, 24th Int. Symp. VLSI Design Test (VDAT), Bhubaneswar, India, 2020.
[2] A. Agnesina, K. Chang, and S.K. Lim, VLSI placement parameter optimization using deep reinforcement learning, 39th Int. Conf. Comput.-Aided Design (ICCAD), 2020.
[3] S. Fujimoto, H. van Hoof, and D. Meger, Addressing function approximation error in actor-critic methods, 35th Int. Conf. Machine Learn., 2018.
[5] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V. Kumar, H. Zhu, A. Gupta, P. Abbeel, and S. Levine, Soft actor-critic algorithms and applications, 2019, https://arxiv.org/abs/1812.05905.
[6] W. Haaswijk, E. Collins, B. Seguin, M. Soeken, F. Kaplan, S. Süsstrunk, and G. De Micheli, Deep learning for logic optimization algorithms, IEEE Int. Symp. Circuits Syst. (ISCAS), Florence, Italy, 2018.
[7] H. Dong, Z. Ding, and S. Zhang, Deep Reinforcement Learning, Springer Singapore, Singapore, 2020.
[8] A. Hosny, S. Hashemi, M. Shalan, and S. Reda, DRiLLS: Deep reinforcement learning for logic synthesis, 25th Asia South Pacific Design Autom. Conf. (ASP-DAC), Beijing, 2020.
[9] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, 2016.
[10] Y. Li, Y. Wang, Y. Li, R. Zhou, and Z. Lin, An artificial neural network assisted optimization system for analog design space exploration, IEEE Trans. Comput.-Aided Design Integr. Circuits Syst. 39 (2020), no. 10, 2640–2653.
[11] M.G. Lagoudakis and M.L. Littman, Learning to select branching rules in the DPLL procedure for satisfiability, Electronic Notes Discrete Math. 9 (2001), 344–359.
[13] A. Mirhoseini, H. Pham, Q.V. Le, B. Steiner, Y. Zhou, N. Kumar, M. Norouzi, S. Bengio, and J. Dean, Device placement optimization with reinforcement learning, 34th Int. Conf. Machine Learn., Sydney, 2017.
[14] S.D. Murphy and K.G. McCarthy, Automated design of CMOS operational amplifier using a neural network, 32nd Irish Signals Syst. Conf. (ISSC), Athlone, Ireland, 2021.
[15] B. Razavi, Design of Analog CMOS Integrated Circuits, McGraw-Hill Education, 2017.
[16] B. Razavi, RF Microelectronics, Prentice Hall, 1997.
[17] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, Trust region policy optimization, 32nd Int. Conf. Machine Learn., 2015.
[19] K. Settaluri, A. Haj-Ali, Q. Huang, K. Hakhamaneshi, and B. Nikolic, AutoCkt: Deep reinforcement learning of analog circuit designs, Design Autom. Test Eur. Conf. Exhib. (DATE), Grenoble, 2020.
[20] M. Sewak, Deep Reinforcement Learning: Frontiers of Artificial Intelligence, Springer, 2019.
[21] R.S. Sutton and A.G. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998.
[22] T.P. Lillicrap, J.J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, Continuous control with deep reinforcement learning, 4th Int. Conf. Learn. Represent. (ICLR), 2016.
[23] H. Wang, K. Wang, J. Yang, L. Shen, N. Sun, H.S. Lee, and S. Han, GCN-RL circuit designer: Transferable transistor sizing with graph neural networks and reinforcement learning, 57th ACM/IEEE Design Autom. Conf. (DAC), San Francisco, 2020.
[25] Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature 521 (2015), 436–444.
[26] Y.-C. Lu, J. Lee, A. Agnesina, K. Samadi, and S.K. Lim, A generative adversarial framework for clock tree prediction and optimization, ACM Int. Conf. Comput.-Aided Design (ICCAD’19), Westminster, 2019.
[27] Z. Zhao and L. Zhang, Deep reinforcement learning for analog circuit sizing, IEEE Int. Symp. Circuits Syst. (ISCAS), Seville, 2020.