brap
Mar 20, 09:54 PM
It's more than a copyright/fair use issue.
...
You AGREED not to bypass or attempt to circumvent DRM, not to redistribute the files in any unauthorized manner, and to use iTunes alone to interface with the iTMS. And not just agreed passively, but EXPLICITLY agreed to those terms, and now you are breaking your word. How is that not morally wrong?
...
<snip>
I do agree that it is effectively the breaking of a promise. Hell, it's the breaking of a contract... which is certainly quite wrong. But what if you believe the original terms and conditions to be morally wrong in themselves?
Yes, yes, I know. Don't use the software, but people do, and people will. In the scheme of things, considering all alternatives, I really can't see such strong objection. For reasons noted in my first post, the software will likely only be picked up by a small number of tech-savvy, yet honest users - and that's the thing. This is a very small market, quite unlikely to be distributing these songs over p2p - which is (correct me if I'm wrong) the main reason for DRM in the first place?
Trying to stay pragmatic here without advocating anarchy. It's not working.
job
Jun 3, 11:42 PM
A short, random quiz for all the old skoolers here. Feel free to add your questions.
Oh, and if you know the answer, don't post it.
We don't want to ruin it for the newer members; use PMs instead. :)
1. What was Durandal7's original user name?
2. What does "QC" stand for?
3. What was jefhatfield's original user name?
4. How many original forums were there?
5. Who is the other admin of Macrumors?
dpaanlka
May 7, 06:54 PM
Cause the Origami sounds like a great idea, a mix between a PDA and a laptop
The point of a PDA is to have a stripped down personal computer you can fit in your pocket.
The origami is like a huge PDA, with a few more features than a PDA, but still fewer features than a full laptop. What's the point? If you already can't fit the Origami in your pocket, why not just go ahead and buy a normal laptop that can do more?
acedickson
Mar 16, 01:34 PM
Before I just bought my first Mac, this old G3, my only experience with a Mac was back in elementary school. We played Oregon Trail on Macs back in the early 90's. I loved that game! It's a classic. Is it still as good now as it was back then?
I hope it has the staying power of a Mario, from Nintendo, back in the day!
Rajj
Sep 19, 02:40 AM
Originally posted by Nipsy
So, unless you wear D-batteries for earrings, where are the headphones going to get their power?
Also, the iPod's battery life has been crippled by a CLOCK, what do you think a constant stream of broadcasting is gonna do?
The same way the Bluetooth Headset HBH-15 from sonyericsson is powered;)
What "clock" are you talking about?
kwajo.com
Nov 13, 05:36 PM
wow, this is a great project guys! :) I may be 1067th right now but with a couple units a day I should be moving up fast :D
jelloshotsrule
Sep 10, 04:57 PM
reason so many registered but didn't post is because it was around then that arn made the forums registered members only. so people registered (after all, it's free) to see the news, but didn't care to post...... not too surprising
true though, was an assload signing up at that time.
MacCoaster
Oct 17, 02:30 PM
...AMD is bringing the Hammer out early beating PowerPC 970 by several months, and they are highly likely to scale much further by the time PowerPC 970 finally comes out.
but:
AMD 64bit CPU 2Ghz
SPECint2000 score of 1202
SPECfp2000 score of 1170
NEW IBM PPC (Power4 Lite) 1.8Ghz
Specint2000 score of 937
Specfp2000 score of 1051
Since when was 1051 ever higher than 1170? The same goes for 937: it isn't higher than 1202. So technically, the AMD is beating the PowerPC 970. Who cares if it's at a higher clock? 200 MHz, yay! That doesn't mean the AMD is less efficient... from the looks of it, the AMD is just as efficient, just coming out earlier.
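For what it's worth, the "just as efficient" claim can be sanity-checked by normalizing the quoted SPEC scores per GHz. This is illustrative arithmetic only, using exactly the numbers posted above:

```python
# Per-GHz comparison of the SPEC scores quoted above (illustrative only).
scores = {
    "AMD Hammer 2.0GHz": {"clock_ghz": 2.0, "int": 1202, "fp": 1170},
    "IBM PPC 970 1.8GHz": {"clock_ghz": 1.8, "int": 937,  "fp": 1051},
}

for name, s in scores.items():
    int_per_ghz = s["int"] / s["clock_ghz"]
    fp_per_ghz = s["fp"] / s["clock_ghz"]
    print(f"{name}: SPECint/GHz = {int_per_ghz:.0f}, SPECfp/GHz = {fp_per_ghz:.0f}")
```

Per clock, floating point is nearly a wash (about 585 vs 584 per GHz), while the Hammer also leads on integer per clock (about 601 vs 521), so the clock difference alone doesn't explain the gap.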
peterjhill
Sep 6, 07:02 AM
I like my Tibook, but I agree that the time has come for more radical changes to the machine. The light grey paint around the edge is terrible. Why can't they just leave the Titanium unfinished? It would certainly help with heat dissipation.
I would like them to stay with Titanium. If they switched, I want it to be as durable, strong, and light, as my current machine.
tjcampbell
Mar 22, 06:05 AM
I love my 360, so I'd say buy it now if you can afford it. You can always trade up down the road if you are interested in the black edition. Cheers, T
bousozoku
Jan 22, 12:06 PM
OK, thanks.
One more question: What makes mc68K better than the text console from Stanford?
Ummm, my code to help you figure out where the progress is? :D :p
Stanford doesn't have any scripts to make it easy to use. You have to start folding@home each time you log in, whereas mc68k's setup puts it in the crontab so it will start as an automatic process and you don't have to mess with it at all. If you have dual processors, it's not any more difficult with the scripts, but Stanford makes you decide how to do it. For the technically-inclined, it's not difficult but a bit tedious.
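As a sketch of what the crontab approach amounts to (the binary path, process name, and flags here are hypothetical illustrations, not mc68k's actual script):

```shell
# Hypothetical crontab entry: every 10 minutes, restart the
# Folding@home console client if it isn't already running.
# (Paths, process name, and flags are assumptions for illustration.)
*/10 * * * * pgrep -x fah_console >/dev/null || /usr/local/fah/fah_console >>/var/log/fah.log 2>&1
```

The point is just that cron re-launches the client unattended after reboots or crashes, which is exactly the convenience the stock Stanford download lacks.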
mc68k
Aug 16, 11:11 AM
i hope stanford makes a version that sets itself up as dual in the future. it seems long overdue
there's a field in the rc5 client that says number of processors and u just change the value....seems like stanford isn't too concerned about this
Kid Red
Sep 5, 10:49 AM
Originally posted by goobus
Even though i would liek apple to keep the current ti design this patent i doubt is much more then 5-6 months old caus ei fu look at teh filing date that means when apple applied for this patent. So it takes quite a while to get a patent to tehy must have jstu got it and i agree with who ever sadi before that the case design was vague adn most likely a genrec drawing.
wow, normally I don't comment on people's typing, and you make some of the exact same typing errors that I do, but please read over your post before hitting reply. It seems one hand is faster than the other; knowing that, slow down and check your typing, it makes it much easier to read :)
ddtlm
Oct 7, 03:10 PM
Backtothemac:
Um, Don't know what chart you were looking at, but with both processors being used, the 1.25 kicked the "snot" out of the PC's.
Ohhh, you mean that one test where the Mac beat an old dual Athlon by, look, 2 points? 38/40 hardly matters, especially seeing as how Athlon MP's are available at 1.8ghz rather than the 1.6ghz tested. Xeons are available at up to 2.8ghz if you want a real top of the line SMP PC. How do you suppose the dual 1.25 would do against that sort of competition?
JLS
Sep 27, 02:45 PM
wasn't adobe getting sued by the owners of "jpeg" or something?! yes it's a good idea anyway, but sounds to me like they are trying to get away from dealing with "jpeg" issues.
Other way round... the jpeg group tried to sue Adobe.
G4scott
Oct 14, 09:24 AM
I just got my iPod 5 minutes ago!!! I was woken up by the doorbell, and it was the fed-ex guy!
I'm so happy!!!
I got the 10gb iPod. If you're looking to get the 5gb iPod, but think you can save up more for the 10, go for the 10. If you need the 20, then go for that one, but I really don't need it (for now, at least :D)
I love the little iPod's... I love 'em good...
MrMacMan
Jul 17, 05:14 PM
Originally posted by vniow
I've still got my blueberry 300 iBook.
*sticks tongue out*
Haha, I can do one better.
Bondi Blue iMac 233 Baby!
shadowfax0
Oct 20, 10:13 AM
http://www.cs.nmsu.edu/~tepezulin/rs64/intro.html
arn
Sep 2, 12:34 AM
Originally posted by rice_web
But, here's the great part of this entire thing. All of these computers would use the same exact processor, with the iBook receiving an underclocked 1GHz 750FXe.
I bet that an 800MHz chip is cheaper than a 1GHz chip underclocked to 800MHz, regardless of any bulk savings we're talking about...
arn
nixd2001
Sep 14, 07:48 PM
Originally posted by onemoof
Someone asked the difference between RISC and CISC.
First thing, there isn't that distinction anymore. RISC originally meant that the processor had fixed-width instructions (so it wouldn't have to waste time asking the software how big the next instruction will be). CISC meant that the processor had variable-width instructions (meaning time would have to be taken to figure out how long the next instruction is before fetching it.) However, Intel has addressed this problem by making it possible for the processor to switch to a fixed-width mode for special processor-intensive purposes. The PowerPC is stuck with fixed-width and has no ability to enjoy the flexibility of variable-width instructions for non-processor-intensive tasks. This means that CISC is now better than RISC. (Using the terms to loosely define Pentium as CISC and PowerPC as RISC.)
Originally it was Reduced versus Complex instruction set computer. Making simpler processors go faster is generally easier than making complex processors go faster as there is less internal state/logic to synchronise and keep track of. For any given fabrication technology, this still generally holds true. Intel managed to sidestep this principle by investing massive sums in their fab plants, effectively meaning that the fab processes being compared weren't the same.
The opposite end of the spectrum from RISC is arguably the VAX line. With this instruction set, massive complexities arose from the fact that a single instruction took so long and did so much. It was possible for timers, interrupts and "page faults" to occur midway during an instruction. This required saving a lot of internal state so that it could later be restored. There were examples where a single complex instruction and a sequence of simpler instructions performed the same effect, but the sequence got the job done quicker, because the processor's internal implementation was being asked to do a simpler task.
The idea of fixed-size instructions isn't directly coupled to the original notion of RISC, although it is only one step behind. One of the basic ideas with the original RISC processors was that an instruction should only take a single cycle to complete, so a 100MHz CPU might actually achieve 100M instructions per second. (This was often not achieved due to memory latencies, but this isn't the "fault" of the processor core.) In this context, a variable-length instruction makes it easy for the instruction decoding (especially if the instruction requires more than one "word") to require more effort than any other aspect of executing an instruction.
There are situations where a variable-width instruction might have advantages, but the argument goes that breaking the overall task down into equal-sized instructions means that fetching (including caching, branch predicting, etc.) and decoding these instructions becomes simpler, permitting optimisations and speed gains to be made elsewhere in the processor design.
Intel blurs RISC and CISC into gray by effectively executing RISC-like instructions internally, even though it supports the apparent decoding of CISC instructions. They only do this for legacy reasons.
Apple will never switch to IA32 (Pentium) because 32 bit processors are a dead-end and maybe have a couple years left. The reason is because they can only have a maximum of 4 GB of RAM [ (2^32)/(1 Billion) = 4.29 GB ]. This limit is very close to being reached in current desktop computers. Apple MAY at some point decide to jump to IA64 in my opinion, and I think they should. Obviously the Intel family of processors is unbeatable unless they have some sort of catastrophe happen to them. If Apple jumped on they'd be back on track. Unfortunately I don't believe IA64 is yet cheap enough for desktops.
I think this "unbeatable" assertion requires some qualification. It may be that Intel will achieve the best price/performance ratio within a suitable range of qualifications, but this is different from always achieving the best p/p ratio whatever. Indeed, IA64 versus Power4 is going to be an interesting battle because Intel has bet on ILP (instruction-level parallelism) whereas IBM has bet on data bandwidth. Ultimately (and today!), I think IBM's bet has more going for it. But that's if you want ultimate performance. The PC space is often characterised by people apparently wanting ultimate performance but actually always massively qualifying it with severe price restrictions (such as fewer than 5 digits to the price).
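The 4 GB figure in the quoted post is plain address-space arithmetic, and it's worth noting the decimal/binary unit wrinkle behind the "4.29 GB" number:

```python
# A 32-bit pointer can address 2**32 distinct bytes.
ADDRESS_BITS = 32
max_bytes = 2 ** ADDRESS_BITS

print(max_bytes)           # 4294967296 bytes
print(max_bytes / 10**9)   # 4.294967296 decimal gigabytes (the "4.29 GB" quoted)
print(max_bytes / 2**30)   # 4.0 binary gigabytes (GiB) exactly
```

So the limit is exactly 4 GiB in binary units; in practice an OS reserves part of that space, so a single 32-bit process sees even less.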
meeble
Jun 24, 01:54 AM
Well, it looks like we've fended off the Knights for now.
last time i checked they were going to overtake us in only 60 days, right now it will take them 637 days!
good to hear! :D
I still haven't got my new systems folding yet - hopefully in the next couple weeks they will join in for team 3446...
peAce-
meeble
Markleshark
Mar 20, 03:29 AM
Yup, BF everyday this week.
applemacdude
Sep 6, 08:06 PM
how much u wanna bet?
mrsebastian
Apr 5, 12:14 PM
forget the usual id vs quark battle, but i find it very interesting that adobe cs2 is soon shipping and quark has this promotion at the same time. i think part of the reason id has picked up a lot of users as well is the price. these prices are from the manufacturers' sites and are full retail: $899 for the standard edition cs2 and $949 for quark 6 (free upgrade to 6.5). we can argue features and nitpick which program does this or that better, but in the end cs2 is a much better deal, as you also get photoshop, illustrator, and then some. on that note, if you were to get id by itself, it's still cheaper than quark at $699. the premium edition of cs2 is $1199; you then get all the software you need for pretty much everything relating to graphics, from print to web and then some, and with a lil creativity you can get it for the standard price.
why pay more money for quark, when you can get id, which is equally good (this is a price argument, so we'll call them equally good), as well as photoshop and illustrator that every graphics person purchases as well?!... oh yeah, even the upgrade price is better: cs2 (standard) at $349, quark $499 if upgrading from version 4, which most people have, since 5 went nowhere. lastly, if you do some searching and are diligent, you can find adobe cs1 for around $250 and upgrade to (premium) cs2 for $549, which brings you to $799, and that is still cheaper than you know who... uh quark, i think you lost the price battle as well. if you could gather your things and just go ahead and leave then, that would be great :D