English Abstract
The growing popularity of social networks has amplified their capacity to shape public opinion. Consequently, powerful entities have increasingly sought to manipulate public sentiment through various methods, including the use of artificial accounts. The influence of these artificial users poses significant risks to freedom of thought, expression, and decision-making in society, while also creating social, political, cultural, and economic challenges. To counter such influence, it is critical to identify the factors that enhance the effectiveness of artificial users. Despite notable advances in modeling and analyzing the impact of artificial users, the precise mechanisms by which they shape public opinion remain incompletely understood. This study introduces the Directed Homophilic Preferential Attachment (DHPA) model, which jointly simulates the formation of network connections and the evolution of opinions, incorporating user characteristics such as the willingness to express opinions when forming relationships. The DHPA model uniquely integrates homophily (the tendency to connect with like-minded users) with social phenomena such as echo chambers and the spiral of silence, which drive consensus or polarization in public opinion. By combining these features, the model generates more realistic outcomes than existing frameworks. Furthermore, it introduces novel metrics for evaluating opinion formation and the efficacy of artificial entities, addressing the current lack of robust assessment tools in this domain. The networks generated by DHPA exhibit structural properties, such as a scale-free degree distribution and small-world characteristics, that align closely with those of real-world social networks. This alignment enables the model to analyze diverse scenarios leading to consensus, polarization, or divergence in public opinion. Through systematic investigation, key factors influencing the effectiveness of artificial users were identified, including attachment strategies (e.g., homophilic preferential attachment), intelligence levels, and lifespan. The results demonstrate that even a limited presence of artificial users can significantly steer public opinion toward consensus, particularly when they employ targeted strategies. Notably, high-intelligence artificial agents (mimicking human behavior) outperformed larger but less intelligent cohorts, achieving over 90% success in neutralizing opposing influences. These findings underscore the importance of understanding the operational dynamics of artificial users in order to develop countermeasures. Policymakers, social media platforms, and governance organizations can leverage these insights to design interventions that mitigate manipulation risks while preserving democratic discourse. The study highlights the urgency of addressing artificial influence in an era where digital and synthetic realities increasingly intersect, offering both theoretical advances and practical tools for safeguarding the integrity of public opinion.
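To make the mechanism described above concrete, the following is a minimal, illustrative Python sketch of a directed, homophily-weighted preferential attachment process combined with a simple willingness-to-express (spiral-of-silence style) opinion update. It is not the thesis's exact formulation: the function names (choose_targets, grow_and_evolve), the similarity kernel, and the parameters (m, h, mu, express_threshold) are assumptions made for illustration only.

```python
import random

def choose_targets(candidates, opinions, in_degree, new_opinion, m, h=2.0):
    """Illustrative homophilic preferential attachment kernel (assumed form):
    weight(v) = (in_degree[v] + 1) * similarity(v)**h,
    with opinions in [-1, 1], so similarity(v) = 1 - |o_v - o_new| / 2."""
    targets, pool = set(), list(candidates)
    while pool and len(targets) < m:
        weights = [
            (in_degree[v] + 1) * (1.0 - abs(opinions[v] - new_opinion) / 2.0) ** h
            for v in pool
        ]
        v = random.choices(pool, weights=weights, k=1)[0]
        targets.add(v)
        pool.remove(v)
    return targets

def grow_and_evolve(n=500, m=3, h=2.0, mu=0.3, express_threshold=0.2, seed=1):
    """Grow a directed network node by node, then nudge each newcomer's
    opinion toward neighbours who are willing to express theirs."""
    random.seed(seed)
    opinions = {0: random.uniform(-1, 1), 1: random.uniform(-1, 1)}
    in_degree = {0: 0, 1: 1}
    out_neighbours = {0: {1}, 1: set()}
    willingness = {0: random.random(), 1: random.random()}
    for u in range(2, n):
        opinions[u] = random.uniform(-1, 1)
        willingness[u] = random.random()
        in_degree[u] = 0
        out_neighbours[u] = choose_targets(range(u), opinions, in_degree,
                                           opinions[u], m, h)
        for v in out_neighbours[u]:
            in_degree[v] += 1
        # Spiral-of-silence style step: only neighbours whose willingness
        # exceeds the threshold contribute to the newcomer's opinion shift.
        vocal = [v for v in out_neighbours[u] if willingness[v] > express_threshold]
        if vocal:
            mean_vocal = sum(opinions[v] for v in vocal) / len(vocal)
            opinions[u] += mu * (mean_vocal - opinions[u])
    return opinions, in_degree

if __name__ == "__main__":
    opinions, in_degree = grow_and_evolve()
    print("max in-degree:", max(in_degree.values()))
    print("mean opinion:", sum(opinions.values()) / len(opinions))
```

Under this kind of kernel, high-degree and like-minded nodes attract disproportionately many incoming links, which is the qualitative behaviour (scale-free degree distributions and homophily-driven clustering) that the abstract attributes to DHPA-generated networks.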