SLCBASAL
blog post
log
software agent agency notes duration of work for this day
posted: 4-18-2026  |  tag: misc

REFLECTION: maybe something I could have done better today was getting things done that actually contribute to the tasks I need to complete. As of right now I am doing very little, and this doesn't contribute to any of it.

Wikipedia: software agent.

“The term agent is derived from the Latin agere (to do): an agreement to act on one's behalf. Such "action on behalf of" implies the authority to decide which, if any, action is appropriate”

Something interesting here is that the authority to decide action is evoked. When I am performing the throwing of a ball, what kinds of authorities are present which give me the authority to decide appropriate action? I am immediately thinking of the faculties of someone and the surrounding environment. In this sense, is there a physics-based authority which is related to

Here it is evoked in reference to the “behalf of” someone. On behalf of could refer to being the representative of someone, being in support of someone, or acting to please someone. Being representative of someone stands out to me as putting another in one of the most subordinate conditions while also being the most freeing in reference to a self-agent. To represent yourself seems different than to support yourself or to please yourself. When you represent yourself you might constitute yourself, in the sense that you are corresponding in essence to the self, in that you would serve yourself as a counterpart or symbol of yourself in your own agency. When someone represents themselves they are acting on behalf of themselves to become a counterpart or image of themselves. In this, part of being an agent is being a counterpart of yourself which decides on appropriate action.

This statement, however, separates the self and the agent in a self-agent. This might be useful as a separation for thinking about software agents later on, and about secondary agents, as with subordinate and autonomous agents. Below are the conditions, as given in the Wikipedia article, for a model to count as a software agent:

“The basic attributes of an autonomous software agent are that agents: are not strictly invoked for a task, but activate themselves; may reside in wait status on a host, perceiving context; may get to run status on a host upon starting conditions; do not require interaction of user; may invoke other tasks including communication.”

Here agent is being used as a tool to describe something with a certain level of being able to do things without having a host present to do them, in that it is able to invoke tasks and therefore create action without necessarily being used like a tool or object. This is mostly being used to differentiate Objects, in the philosophy of AI, from this kind of model. Objects are defined in terms of methods and attributes, and these methods and attributes are separate; an agent is defined in terms of its behavior.

I wonder if you would define other sorts of systems, without these requirements, in terms of their behavior, as in whether this approach to and demonstration of the software agent is just a method for thinking about the kinds of things which are being created in LLMs and software systems right now. There is the idea of persistence as a measure of the way these systems function. In this persistence, there is a continuous running of code once started, allowing for continuous decision making. There is also goal-directed behavior, with task selection and prioritization as pieces being attributed to autonomy, in that with autonomy you are able to select action, and with agency you can act here.
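To make these attributes concrete for myself, here is a minimal sketch of what that kind of persistence and wait/run status might look like in code. This is my own illustration, not anything from the Wikipedia article: the class name, the perceive/act methods, and the starting condition are all hypothetical stand-ins.

```python
import time

class PersistentAgent:
    """Toy illustration of the Wikipedia attributes: not invoked for a
    single task, resides in wait status on a host, perceives context,
    and moves itself to run status when a starting condition holds."""

    def __init__(self, goal):
        self.goal = goal          # goal-directed behavior
        self.status = "wait"      # resides in wait status on a host

    def perceive(self):
        # Hypothetical stand-in for reading the environment
        # (files, messages, sensors, ...); here, just the local hour.
        return {"hour": time.localtime().tm_hour}

    def starting_condition(self, context):
        # Arbitrary condition chosen only for illustration.
        return context["hour"] >= 9

    def act(self, context):
        # Task selection would happen here; the agent may also invoke
        # other tasks, including communication with other agents.
        print(f"acting toward goal {self.goal!r} given {context}")

    def run_forever(self, poll_seconds=60):
        # Persistence: once started, a continuous loop allows continuous
        # decision making without a user invoking each task.
        while True:
            context = self.perceive()
            if self.starting_condition(context):
                self.status = "run"
                self.act(context)
            else:
                self.status = "wait"
            time.sleep(poll_seconds)
```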

Nwana, H. S. (1996). Software agents: an overview. Knowledge Engineering Review, 11(03), 205–244. https://doi.org/10.1017/S026988890000789X

Something interesting here, which is mentioned on page 5, is the struggle with a lack of ownership of the term software agent, and of agent as a whole, and the confusion that follows from it. Agent is used frequently within the context of its colloquial double meaning, so many things which a philosopher might never put under the label of agent in their own field are put there in others just because the word is publicly available. Because of this, many people believe they know what they are talking about when they write and talk about agents; they may know some kind of agent to an extent, but they do not have a complete view of them. They mention fuzzy logic here as a word which is kind of owned by philosophy.

This does, however, mean that software agents are able to be classified in different kinds of ways, and they mention one classification which seems interesting in that the agents are classified by role: “arch agents, report agents, presentation agents, navigation agents, role-playing agents, management agents, search and retrieval agents, domain-specific agents, development agents, analysis and design agents, testing agents, packaging agents and help agents”. Here there are many different kinds of agents and they are classified by role, but I wonder what this might have to do with the way we think about human agents.

Do we call human agents the line cook agent, the bus driver agent, the packaging agent, the data science student agent? We could, but only if we were thinking about them as tools instead of selves. There is a certain removal of agency from the agent we think of in software agents; though they can act without a host, they act for a host. This reminds me a lot of the concepts of free will and acting on behalf of someone again. We think of these agents as acting on behalf of someone rather than acting alone.

When I throw a ball to my friend, am I acting on behalf of my friend for the sake of the game? Am I acting on behalf of myself? Is the action singular, in that I am throwing the ball and that is the act and there is no behalf at all? Would it be useful to think of agents, as with software agents, as just performing the act and not acting on behalf of any specific thing at all?

There are some classifications of software agents in this paper that I am also interested in.

Mobile or static agents:: mobile agents are able to move around and through different networks; static agents stay where they are.

Deliberative or Reactive agents:: deliberative agents deliberate based on some internal symbolic model, in which they engage in planning or negotiation to achieve coordinated action with other agents, while reactive agents do not have any internal model and instead use a stimulus-response scheme, responding to the present state of the environment (a rough sketch of this contrast follows this list).

Autonomy, Learning, and Cooperation:: Autonomy meaning acting without human guidance, proactively. Cooperation meaning the ability to interact with other agents. Learning is harder to pin down; it can mean increased performance over time, but also the ability to learn in the colloquial sense.
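Since the deliberative/reactive split is the one that reads most like an architectural claim, here is a rough sketch of the contrast as I understand it. None of this comes from Nwana's paper; the names, the rule table, and the tiny planning step are hypothetical, only there to make the difference visible.

```python
# Reactive: no internal model, pure stimulus-response to the present state.
def reactive_agent(percept):
    rules = {
        "obstacle_ahead": "turn_left",
        "clear_path": "move_forward",
    }
    return rules.get(percept, "do_nothing")

# Deliberative: keeps an internal symbolic model and plans over it.
class DeliberativeAgent:
    def __init__(self, goal_location):
        self.world_model = {}            # internal symbolic model of the world
        self.goal_location = goal_location

    def update_model(self, percept):
        # Fold the new percept into the internal model.
        self.world_model[percept["location"]] = percept["contents"]

    def plan(self, current_location):
        # Stand-in for real planning or negotiation: choose a sequence of
        # steps toward the goal based on what the model says is there.
        if self.world_model.get(current_location) == "obstacle":
            return ["turn_left", "move_forward"]
        return ["move_forward"] * abs(self.goal_location - current_location)

    def act(self, percept):
        self.update_model(percept)
        return self.plan(percept["location"])

print(reactive_agent("obstacle_ahead"))                      # turn_left
print(DeliberativeAgent(goal_location=3).act(
    {"location": 0, "contents": "clear"}))                   # a short plan
```

The reactive agent has nothing to consult but the present stimulus; the deliberative one carries a model forward and decides against it, which is where the coordination and negotiation Nwana mentions would live.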

Why does cooperation need to be a kind of qualifying type for these software agents?

Would it make sense to think of this software as being less analogous to an agent and more analogous to a mind or a means of self?

Something to think about is that maybe this paper is nearly completely unrelated to a lot of the philosophy currently out there that I would like to talk about. This paper could instead be used to analyze the kinds of tasks which we are currently using AI agents for and how these differ from previous views of these kinds of agents and their uses. Section 5 and all of its subsections just kind of give an overview of these operations.

I think maybe one of the more interesting things this paper has to offer is the argument that it is not the social but the structural sorts of issues that would be the main issues in a future with AI agents. Those being privacy and responsibility (when you give your responsibility to an agent, how do you hold it accountable?). I consistently see this point expressed in the statement, “you cannot have an AI manage tasks that involve people, or manage people, because it cannot be held accountable, and we must be able to hold things accountable in our systems.”

Franklin, Stan, and Art Graesser. "Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents." International workshop on agent theories, architectures, and languages. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996.

This paper looks at some fringe definitions of agents, which the authors present as the result of an inquiry into being able to explain directly, when people ask, what the difference is between an autonomous agent and just a computer program.

“The MuBot Agent [http://www.crystaliz.com/logicware/mubot.html] "The term agent is used to represent two orthogonal concepts. The first is the agent's ability for autonomous execution. The second is the agent's ability to perform domain oriented Reasoning."”

Meaning that the agent can perform two tasks: the first is autonomous execution; the second is to have that execution oriented by reasoning about the environment around it.

The AIMA Agent [17, p. 33] "An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors. "”

This definition seems to be closer to the definition of a program, but it is also somehow much more similar to the definitions which we give for a software agent.

“The Maes Agent [14, p. 108] "Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are Designed."”

“The KidSim Agent [18] "Let us define an agent as a persistent software entity dedicated to a specific purpose. 'Persistent' distinguishes agents from subroutines; agents have their own ideas about how to accomplish tasks, their own agendas. 'Special purpose' distinguishes them from entire multifunction applications; agents are typically much smaller."”

There is an idea expressed here in the KidSim agent definition which I believe is probably one of the closest to the ideas of a software agent that make sense to me: that being persistent is something which matters.
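To see why “persistent” is doing real work in the KidSim definition, here is a small sketch of the difference between a subroutine and something with its own agenda. The example is mine, not KidSim's: the task names, the agenda structure, and the threshold are made up for illustration.

```python
# A subroutine: invoked for a task, does it, returns, and keeps nothing.
def summarize(text):
    return text[:100]

# A toy persistent agent: it keeps state between invocations and has its
# own ideas (agenda) about how and when to move from one step to the next.
class NoteKeepingAgent:
    def __init__(self):
        self.agenda = ["collect", "summarize", "file"]  # its own plan for the task
        self.notes = []                                 # state that persists across calls

    def handle(self, text):
        step = self.agenda[0]
        if step == "collect":
            self.notes.append(text)
            if len(self.notes) >= 3:        # its own criterion for moving on
                self.agenda.pop(0)
            return "still collecting"
        if step == "summarize":
            self.agenda.pop(0)
            return " / ".join(note[:20] for note in self.notes)
        return "filed"

agent = NoteKeepingAgent()
for note in ["first note", "second note", "third note", "anything"]:
    print(agent.handle(note))
```

Calling summarize() twice gives you the same kind of thing each time; calling agent.handle() twice does not, because the agent carries its agenda forward. That, I think, is the cash value of “persistent” here.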

“Russell and Norvig put it this way: "The notion of an agent is meant to be a tool for analyzing systems, not an absolute characterization that divides the world into agents and non-agents." [17, p. 33]”

This statement here is probably one of the most impactful statements on this that I have encountered: we are using the term agent to analyze these systems, not to characterize them in a way which divides them from other sorts of programs, especially when talking about AI agents and software agents. A lot of confusion, I think, comes here, as before, from the way agent already exists as a word in our culture and not just as a philosophical word. A lot of confusion also comes when you realize that software agent, even in its primary situation, is still a word which is coming to characterize more than it is being used as a tool to help better understand the way we use artificial intelligence. A mathematical definition does not necessarily fail in how it defines, but rather in how that definition is being used to better understand something like the kind of technology that software agents are.

Is it good to think of a software agent more in the way we think of a molecule in its chemical reactions upon coming in contact with matter? Would it be good to think of these interactions as an act with the reaction? Would it be good to think of software agents as living beings, if it allows us to do certain things with them that we would not otherwise, giving us access to metaphors we would not otherwise have access to?

This look at the autonomous agent here may be useful in that we would not normally, based on previous definitions, consider these to be autonomous agents, as they do not do things on their own behalf but on the behalf of the user who is within the interface.

“An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future.”

I like that they draw on descriptive tasks to look at the ways in which we are looking at software agents, rather than just the tasks which are defining. Often these descriptive tasks cause you to look more closely at the ways in which you are interacting with a term. For example, defining something gives you nothing in its definition, but the act of defining allows you to better understand your own thinking about a certain word. This is a bit of why I believe AI causes brain drain: the response and getting to the response are intrinsically intertwined tasks, and understanding the response is often nowhere near as difficult as getting to the response down a series of steps. I wonder how this could be related to Wittgenstein's philosophy of mathematics.

I also enjoy the way they draw on biological taxonomy in later characterizations here, in that they attempt to use different logical and thought paradigms to better clarify their own thinking. Maybe it could be useful to draw on these kinds of trees and maps to better clarify my own thinking about agents and agency. We have:

Hierarchical with the chart above.

Environmental classification

Taxonomic classification

Control structures

Language structures.

Britannica article // software agent. This article mostly talks about very basic things about software agents and widely known uses as of late.

IBM, Cole Stryker, on the evolution of AI agents:

Rather than just think and talk, what if AI agents could also do? This talks about feedback loops as the very beginning of AI becoming something in existence, as it is able to feed information back into itself. Maybe something else to check out here would be this paper, as it is the source of the neural networks which are now common:

Warren S. McCulloch and Walter Pitts published “A Logical Calculus of the Ideas Immanent in Nervous Activity” in the Bulletin of Mathematical Biophysics. Also maybe this one:

Oliver Selfridge, in his 1959 paper “Pandemonium: A Paradigm for Learning,” established conceptual structures that later agentic architectures would echo. This article gives a lot of good historical papers to look into and really sets the groundwork for how we got to where we are now. Prompt engineering could be interesting to look into, as this is a method which implies that humans are now being influenced to act in certain ways with AI and its systems.

‘Personal Autonomy’, Stanford Encyclopedia of Philosophy

“Governing oneself is no guarantee that one will have a greater range of options in the future. … what are the necessary and sufficient features of this self-relation?”

Looking at personal autonomy, it didn't really come to me to ask the question about greater future options, though this is something which I have related to agency and autonomy in the personal essays which I have written. Having future options is something which I think could have value to an agent.

Here autonomy means to govern oneself, and to say you would like to be autonomous means to say that you would like to have the ability to govern yourself; with that, you are saying essentially that those powers which prevent you from governing yourself in any sort of way are illegitimate. In saying that those powers are illegitimate, you are legitimizing yourself. You are saying that the action of enacting such and such a thing is now within the power of the autonomous agent rather than in the power of another.

“An agent is one who acts. In order to act, one must initiate one’s action.”

Someone is an agent in so far as they act, meaning that when you act you are an agent. Her own judgements are authoritative.

How might we relate the idea of her own judgements being authoritative to the idea of a subordinate agent, in that the agent is the one who acts and whose judgements are authoritative, but what of someone who acts on the behalf of another? Woah

“an agent must regard her own judgment about how to act as authoritative—even if it is only the judgment that she should follow the command or advice of someone else.” This could be something, but maybe the question lies more in the fact that there is a relationship between the agent and the other.

“a person can have an authoritative status with respect to her motives without having any real power over them.” This could probably be the most relevant thing out of all of these to executive functioning, in that someone can have authority over their own self but also not be able to govern the self to act, and therefore fail some part of autonomy. Not in that one ever loses authority, but that their authority is undermined and taken elsewhere. Maybe it would be good to examine this metaphor here in reference to what it means to undermine someone's autonomy.

“What distinguishes autonomy-undermining influences on a person’s decision, intention, or will from those motivating forces that merely play a role in the self-governing process?”

Certain influences affect our ability to govern ourselves, but what are these? Can there be an influence which affects our personal autonomy that is innate to the self, as under an instance of drugs or some other changing instance? Also, with this, something like impulses as a power at odds with someone's personal autonomy: how external are those forces?

“It is difficult to answer these questions when the governing agent and the agent she governs are one and the same.”

“Coherentist”: an agent governs her own action iff she is motivated to act as she does because this motive coheres with some mental state which represents her point of view.

“According to this intuition, if someone repudiates, or in some other way dissociates herself from, the causal efficacy of her own motives, then the power of these motives is independent of her authority.”

These attitudes need not be reasonable, and she need not know why or how she has the desires, for them to have a major effect on the way she is acting in the world. How might this contribute to the way we could conceptualize this? Is this an empathetic or correct approach? Really, how do the mental activities contribute?

“reasons-responsive conception of autonomous agency, an agent does not really govern herself unless her motives, or the mental processes that produce them, are responsive to a sufficiently wide range of reasons for and against behaving as she does.”

Here we could look at relation, as in relation to agency and autonomous action. Something I might note here is that so far the subjects of these seem to go in line with the kinds of things we studied in philosophy of mind dealing with cognition and reasoning: what does it mean to be a reasoning agent, and how might focusing on behavior vs. reasoning affect your work? Someone who cannot respond to reasons might have some form of limited reason? “(i) the determining causes that prevent an agent from governing herself when she employs her reason from (ii) the causes that determine how an agent governs herself when she reasons. According to this incompatibilist conception,”

I am unsure about this one.

back to home