An artificial general intelligence (AGI) agent is one capable of achieving a wide range of goals, and an agent that pursues such generality is necessarily complicated. The world the agent interacts with, however, is far more complicated than the agent itself. Moreover, the agent observes only part of the world at any moment, so it must construct its own summary of the past; this summary is the agent's subjective state. Every component of the agent, except the one that generates the state, takes the state as input and produces its desired output. Two fundamental questions follow: which components should the agent maintain, and how should those components interact with each other? More specific questions arise from these two. For example, what makes an agent state good or bad? What should the world model take as input and produce as output? Are sub-tasks necessary, and which sub-tasks are good or bad? These questions concern designing the architecture and identifying the purpose of each component, rather than specific ways to implement each component. Our community welcomes everyone interested in brainstorming such an architecture design.
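The structural claim above can be made concrete with a minimal sketch: a single component folds each observation into the subjective state, and every other component (policy, world model, and so on) reads only that state, never the raw history. All names and signatures here are illustrative assumptions, not a fixed design.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class Agent:
    """Sketch of the architecture: one state-update component, and
    other components that each map the subjective state to an output."""
    state: Any = None
    # The single component that summarizes the past into the state.
    state_update: Callable[[Any, Any], Any] = lambda state, obs: obs
    # All remaining components take only the state as input.
    components: Dict[str, Callable[[Any], Any]] = field(default_factory=dict)

    def step(self, observation: Any) -> Dict[str, Any]:
        # Fold the new partial observation into the subjective state.
        self.state = self.state_update(self.state, observation)
        # Every other component consumes the state, never raw history.
        return {name: fn(self.state) for name, fn in self.components.items()}
```

A toy usage, with a running sum standing in for a learned state summary:

```python
agent = Agent(
    state=0,
    state_update=lambda s, o: s + o,  # toy summary of the past
    components={"policy": lambda s: "act" if s > 0 else "wait"},
)
outputs = agent.step(1)  # state becomes 1; outputs["policy"] is "act"
```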