Love

I don't believe in that idea of love whereby people 'want someone to be happy for their own sake'. It's one of those popular delusions, IMO.
 
What if someone loves another so much that they would voluntarily die for them, to defend them?

Then, I would argue, they don't want to lose that person. I would risk my life to defend my computer and CD backups from being stolen, or to save them from being destroyed in a fire, because there is a lot of content on them which is simply irreplaceable, and if I lost it all I would have lost everything important to me that was really worth living for. It's not that I love the contents of my computer for their own sake, merely that I would rather risk my life than remain free of bodily injury and be condemned to live having lost them.

If you're willing to die for liberty, etc., it's more to do with your not wanting to live without it: weighing up the harm of 'risking your life' against the harm of 'letting xyz be destroyed', deciding which matters most to your selfish interests, and acting on that. (Especially in a world where parental obligation is actually a law, and heavily socially sanctioned, such that the culture would have a parent feel so much guilt if they didn't risk their life to save their child that they might well rather die in a hopeless attempt than face society for the rest of their life having done nothing. In a case like that, never mind the 'decades of investment and attachment' at risk; with something like a 'friend' or 'wife' rather than a child, I would think some personal rather than socially biased calculus like that would apply. Though there is also the 'I'd better do something, because were it me I would hope someone would do the same' social aspect: participating in upholding 'compassionate' norms as being in one's own long-term interests.)

No one has ever given me adequate reason to think 'persons' are exempt from this pattern, and thus (as I actually told someone just the other day) I don't distinguish between 'valuing' and 'loving'. To which I was met with condescension rather than argument: they claimed that my seeing through the popular delusion, rather than just taking it for granted, meant I've 'never really loved anyone', lol, rather than that they'd never bothered to scratch the surface lest they find a less than admirable truth beneath.
 
I believe in love and all, but one thing is for sure: it won't always last.

A point which highlights the fact: if the loved one does not change but the love dissipates, then either (a) the love was some sort of emotional bias, or (b) one's own desires, particularly those fulfilled by the loved one, have changed. Either way, the 'loved' was not "loved" for what they are in themselves, separate from their relationship to the lover's desires. Conversely, if the loved one does change, no longer embodying 'that which is loved', while one's own desires (the things one loves) remain the same, then that object, formerly loved, no longer is.
 
I can well imagine a mother or father sacrificing themselves for their child / children. I'm having trouble imagining you sacrificing yourself for the sake of your data, Seditious? ;)

NM - got 10 minutes into that show and it sounded far too much like I was listening to you! :)
Telling me not to listen to social dogma, but to listen to her dogma instead...
 
Then, I would argue, they don't want to lose that person. I would risk my life to defend my computer and CD backups from being stolen, or to save them from being destroyed in a fire, because there is a lot of content on them which is simply irreplaceable, and if I lost it all I would have lost everything important to me that was really worth living for. It's not that I love the contents of my computer for their own sake, merely that I would rather risk my life than remain free of bodily injury and be condemned to live having lost them.

If you're willing to die for liberty, etc., it's more to do with your not wanting to live without it: weighing up the harm of 'risking your life' against the harm of 'letting xyz be destroyed', deciding which matters most to your selfish interests, and acting on that. (Especially in a world where parental obligation is actually a law, and heavily socially sanctioned, such that the culture would have a parent feel so much guilt if they didn't risk their life to save their child that they might well rather die in a hopeless attempt than face society for the rest of their life having done nothing. In a case like that, never mind the 'decades of investment and attachment' at risk; with something like a 'friend' or 'wife' rather than a child, I would think some personal rather than socially biased calculus like that would apply. Though there is also the 'I'd better do something, because were it me I would hope someone would do the same' social aspect: participating in upholding 'compassionate' norms as being in one's own long-term interests.)

No one has ever given me adequate reason to think 'persons' are exempt from this pattern, and thus (as I actually told someone just the other day) I don't distinguish between 'valuing' and 'loving'. To which I was met with condescension rather than argument: they claimed that my seeing through the popular delusion, rather than just taking it for granted, meant I've 'never really loved anyone', lol, rather than that they'd never bothered to scratch the surface lest they find a less than admirable truth beneath.

Sacrificing oneself for another is quite possibly the clearest confirming instance of the theory 'X wants Y to be happy for Y's own sake.' And it's quite possibly, on the face of it, the clearest falsifying instance of the theory 'For every X, it is not the case that X wants Y to be happy for Y's own sake'. But what you're doing here is reinterpreting what clearly seems to be a falsification of the aforementioned theory just to make it fit your own theory. And now one wonders what would count as a counterexample to your theory. It doesn't seem like anything would! I suspect that no matter how one were to behave, you would just create some kind of ad hoc reinterpretation of the data and say "well, it's really just an instance of selfishness/self-interest/etc." But the fact that a theory is confirmed by/compatible with everything is not a virtue of a good theory. So what would you (if anything) consider to be a counterexample to your claim?
 
I'm having trouble imagining you sacrificing yourself for the sake of your data, Seditious? ;)

Porn, albums, podcasts: I wouldn't risk myself for any of that... it's a very select set of content, the stuff I could never rebuild (research, remake, rewrite, etc.), that I would risk myself for.

Think Fahrenheit 451 :lol:
 
And now one wonders what would count as a counterexample to your theory. It doesn't seem like anything would!

I was thinking the same about yours... and you never even bothered to defend that instance of 'X wants Y to be happy for Y's own sake'.

I suspect that no matter how one were to behave, you would just create some kind of ad hoc reinterpretation of the data and say "well, it's really just an instance of selfishness/self-interest/etc."

It isn't necessarily ad hoc---let me talk to a person and I'll tell you what he'll die for before he does it.
 
I was thinking the same about yours... and you never even bothered to defend that instance of 'X wants Y to be happy for Y's own sake'.

Well, then you're clearly not very imaginative. If I claimed that Smith wants Wilma to be happy for her own sake then there are a whole shitload of observations that would falsify said claim. The observation of Smith physically abusing Wilma is quite obviously a counterexample. On the other hand, suppose you said 'Smith wants Wilma to be happy, but not for her own sake', then when confronted with some kind of instance that would suggest otherwise, I suspect you'd construct some story out of thin air like you did in your other post just to save your claim. And it's not clear to me why your interpretation of the data is preferable to mine, or how this new claim you would construct is supposed to add predictive power or anything of that sort to your original theory. It just seems like too convenient a way to patch up a hole. And again I ask, what would you consider to be a counterexample to your theory?

It isn't necessarily ad hoc---let me talk to a person and I'll tell you what he'll die for before he does it.

I have no clue what you're talking about here.
 
Porn, albums, podcasts: I wouldn't risk myself for any of that... it's a very select set of content, the stuff I could never rebuild (research, remake, rewrite, etc.), that I would risk myself for.

Think Fahrenheit 451 :lol:

You misinterpret! It's not 'risk' that is being discussed, it's 'sacrifice' - you die, to save x.
 
You misinterpret! It's not 'risk' that is being discussed, it's 'sacrifice' - you die, to save x.

Again, I'd point to the social sanctions and culturally inspired emotions: if I felt I couldn't live with myself having done (or not done) something, then, in that self-interest, I'd choose to die satisfied with and respecting myself instead.

If, for instance, I were given no choice but to rape a child or shoot myself, it needn't be out of 'wanting the child to be happy for the child's sake' that I choose not to do it. And if I chose to shoot myself, and along came someone claiming I'd done it out of compassion, that 'this is the clearest refutation of inescapable self-interest', I'd unfortunately be unable to correct them in their unproven assumption.
 
If I claimed that Smith wants Wilma to be happy for her own sake then there are a whole shitload of observations that would falsify said claim. The observation of Smith physically abusing Wilma is quite obviously a counterexample.
It would show either an inconsistency with that desire of his (for her to be happy for her own sake) or perhaps just ignorance on his part, like when someone gives 'tough love' thinking it will help someone but is really just hurting them. It doesn't necessarily show a lack of such desire; he could certainly still claim it's what he wants (just as eating too many calories doesn't mean 'this person doesn't want to be thin', merely that they presently lack the self-control to pursue that desire).

On the other hand, suppose you said 'Smith wants Wilma to be happy, but not for her own sake', then when confronted with some kind of instance that would suggest otherwise,
Such as? Get specific here; give me a case that could actually provide some proof of its existence.

And it's not clear to me why your interpretation of the data is preferable to mine,
It's not clear you have any data supporting 'Y for Y's sake' yet.

or how this new claim you would construct is supposed to add predictive power or anything of that sort to your original theory. It just seems like a too convenient way to patch up a hole. And again I ask, what would you consider to be a counterexample to your theory?
If my theory suggests that there may be no counterexample (or maybe I'm just not that imaginative), I'm not sure what you want me to provide. A lack of counterexamples to the theory of gravitation or evolution isn't sufficient to say 'therefore it's a bad theory', because the truth of the theory would explain that very lack.

It's sufficient for now that you lack any example in support of the contradicting theory. Show me something which doesn't fit, something that suggests the only motive was 'for Y's sake'; that would be a counterexample, obviously.

I have no clue what you're talking about here.
You claimed it was merely an after-the-fact assessment... so I pointed out that, if I knew the person well enough, I could hypothesize his actions in future situations.
 
It seems to me that much of the above can be avoided by acknowledging a divided subject. So long as consciousness is treated (intentionally or not) as a singularity, ricocheting amongst binary oppositions, little can be learned.

One must also admit the sway of inner-representation, analogous to the Kantian distinction between appearance and "thing-in-itself". This greatly complicates, for instance, Seditious' example of public shame; the values, and thereby the shame, are largely internalized by the subject! It may not be a case of public shame as such at all, but of the inner-representation of "public" shame instantiated by the subject.

With the above in mind, I see no great difficulty in saying that it is quite possible for "one" to wish another to be happy "for their own sake"; meaning happiness as an other, by recognizing an other, not merely as a figure of inner-representation.

To interject that the horizon of thought is the thinking thing, and thus that wishing for others' happiness is grounded in, and in some sense "for", the thinking subject's "happiness", is to greatly confuse the meaning and operation of this "for".
 
I see no great difficulty in saying that it is quite possible for "one" to wish another to be happy "for their own sake"; meaning happiness as an other, by recognizing an other, not merely as a figure of inner-representation.

You'll have to elucidate for me what "meaning happiness as an other, by recognizing an other, not merely as a figure of inner-representation" means, because as it stands I don't think it's relevant.

It reminds me a lot of the old morality nonsense that 'you have to dehumanize someone before you can act inhumanely toward them: create an "us and them" mentality and consider them less than human, so that you can rationalize treating them worse than other humans', and I'm not sure if that's the sort of thing you're intending.
 
A psychopathic personality would never comprehend the notion of loving someone selflessly or wishing to make another happy for that other person's sake. No amount of explaining would break through that incomprehension.
 
I can well imagine a mother or father sacrificing themselves for their child / children. I'm having trouble imagining you sacrificing yourself for the sake of your data, Seditious? ;)

NM - got 10 minutes into that show and it sounded far too much like I was listening to you! :)
Telling me not to listen to social dogma, but to listen to her dogma instead...

You could do a lot worse than be guided by me, Blotus :saint:
 
It reminds me a lot of the old morality nonsense that 'you have to dehumanize someone before you can act inhumanely toward them: create an "us and them" mentality and consider them less than human, so that you can rationalize treating them worse than other humans', and I'm not sure if that's the sort of thing you're intending.

That's not at all what I intended. I am not interested in discussing morality.

You'll have to elucidate for me what "meaning happiness as an other, by recognizing an other, not merely as a figure of inner-representation" means, because as it stands I don't think it's relevant.

To wit, that a person recognize the difference between their inner-representation of another and the other as such. (In non-"psychological" Kantian terms, this distinction is analogous to that between appearance and the Ding an sich, the "thing-in-itself".)

It is relevant to this thread because a concern for an other "in-itself" is radically different from a concern merely for the inner-representation of the other. The former opens a possibility for "love"; the latter manifests the type of purely utilitarian subjectivism you detail.
 
It would show either an inconsistency with that desire of his (for her to be happy for her own sake) or perhaps just ignorance on his part, like when someone gives 'tough love' thinking it will help someone but is really just hurting them. It doesn't necessarily show a lack of such desire; he could certainly still claim it's what he wants (just as eating too many calories doesn't mean 'this person doesn't want to be thin', merely that they presently lack the self-control to pursue that desire).

Fair enough. Sure, we can avoid an instance of falsification by adjusting assumptions elsewhere, as you've demonstrated, and we can do that with just about any theory. But for you to make the specific moves you've just made here, it looks like you'd have to admit that Smith can genuinely have a psychological state of desiring for Wilma to be happy for her own sake. But of course that wouldn't be good for you, because then it would be plausible to maintain that there are ordinary cases in which a person can act in accordance with that desire, just as people can act on their desires to, e.g., eat (by eating, that is).

Such as? Get specific here; give me a case that could actually provide some proof of its existence.

I don't think this has necessarily been a matter of me proving my position. Rather, what I've taken issue with is that the theory you prefer (and the more general one that nobody ever does anything that is not ultimately in their own self-interest) seems to be framed in such a way that no conceivable behavior of any person could refute it. I already indicated that I felt that an instance of someone foregoing their own desires for the sake of someone else's (e.g. dying for somebody) at least constitutes some reason to doubt the idea that nobody ever does anything not in accordance with their own self-interest. I don't know why you keep going on about me proving my position as if you've already provided sufficient evidence for the truth of your own. What evidence have you mustered in your defense? You've merely reinterpreted a piece of data by making a bunch of claims about what really underlies such data. But I don't see that you've given any reason to think said reinterpretation is justified, so I don't understand why you think my position is lacking anything in the context of this debate.

If my theory suggests that there may be no counterexample (or maybe I'm just not that imaginative), I'm not sure what you want me to provide. A lack of counterexamples to the theory of gravitation or evolution isn't sufficient to say 'therefore it's a bad theory', because the truth of the theory would explain that very lack.

You're misunderstanding me slightly. The point is not that your theory has never had any counterexamples. In fact, if a theory is constantly put to the test and no counterexamples to it have been found, then that is good for the theory. The issue is one about what a theory asserts. The theory of gravitation rules certain observations out. That is, the theory of gravitation is only consistent with a subset of all possible observations. It is not consistent with everything we might possibly observe. A competent scientist (or possibly anybody who is scientifically literate) will be able to tell you what a counterexample to the theory of gravitation might be. This theory of yours (and the more general one I mentioned earlier) seems to be framed in such a way that it is consistent with all conceivable observations. Hence, it's not even testable. That's why I'd like to know: what sort of observations do you think the theory is incompatible with? What behavioral or psychological data would force abandonment of this theory? When I've seen this sort of idea propounded in the past, I've never been given an answer to this question.

It's sufficient for now that you lack any example in support of the contradicting theory. Show me something which doesn't fit, something that suggests the only motive was 'for Y's sake'; that would be a counterexample, obviously.

What do you want? A specific case from real life that would demonstrate the truth of my position, or would you rather I sit in my chair and think really hard about how to construe some type of behavior as being inextricably for the sake of somebody else? I can't do the former, and I refuse to do the latter because I don't even disagree with you in the sense that I think it is possible for there to be the appearance of X doing A for the sake of Y even when X is really not doing A for the sake of Y. What I disagree with is the idea that all such appearances are ultimately of this nature. How is that position justified? The argument you gave for it was merely you sitting in front of your computer and reading some psychological state into an agent in some hypothetical situation. How is that proof of your position?