
The pixel art of indie games never ceases to amaze me nowadays; those were some silky smooth animations.

see more

That trailer had an amazing vibe, but I think "silky smooth" should be reserved for stuff like this or this. Pixel art games are never smooth :-(

11 points · 2 days ago · edited 2 days ago

Check out ambigrams, for example "bid" is a mirror ambigram, "BID" is an up-down mirror ambigram, "pod" is a rotational ambigram, and "()()" is all of those.

Original Poster · 2 points · 3 days ago

"right way to use Aumann agreement"

I don't think there is one. Frankly, I think we're far enough from the ideal scenario of the theorem that at best it reduces to aphorisms like "consider that the other person might know something you don't".

For example, you've established that you need to use it with people outside your group as well, so how are you going to choose those people? How are you going to estimate and quantify people's irrationality? It seems like this just ends with you picking more and more numbers semi-arbitrarily, each of which gives you more opportunities to introduce your own biases into the equations. You might as well just say you guessed and be honest about it.

see more
2 points · 3 days ago · edited 3 days ago

I think you can do better than guessing. For example, here's a ladder of methods to figure out if strong AI is near or far:

1) Guess

2) Read a piece by a journalist

3) Read an interview with an AI/ML researcher

4) Read a survey of participants at leading AI/ML conferences

5) Read current and past surveys of experts, check past predictions against reality, get a rough optimism factor, and apply it to current surveys

The later methods look more and more like estimating rationality and then updating. Does that make sense?

Original Poster · 1 point · 3 days ago

If you're saying that we should factor in the opinions of experts with years of domain specific expertise, I strongly agree with you. I think trying to deviate from the experts will often just introduce your own biases.

I'm just not clear on what all the stuff about Aumann's theorem actually contributes here. I don't know anything about the priors of AI experts, and I doubt many of them are Bayesian reasoners. The relevant thing here seems to be the domain expertise, not the "rationality".

see more
1 point · 3 days ago · edited 3 days ago

Theologians have tons of domain expertise, but that doesn't make me trust them when they say God exists. And even in more objective fields, experts sometimes disagree, so you need to evaluate their rationality somehow. This is one of the founding problems of LW.

Maybe there's a bit of missing mood here. I'm the kind of person who sees unsolved questions everywhere. For example, consider this dialogue:

Alice: Surprise means seeing an event with low probability.

Bob: Really? Today I saw a car whose license plate said 7421 and that didn't surprise me at all.

To me this dialogue shows that the nature of surprise is non-trivial and worth figuring out. I feel the same way about the nature of trust, the nature of hope, and many other things. Aumann's theorem is like a compass needle that points me toward the nature of trust. That's why it's valuable.


The article says C isn't a good low-level language for today's CPUs, then proposes a different way to build CPUs and languages. But what about the missing step in between: is there a good low-level language for today's CPUs?

This is a recurring and interesting issue that gets a lot of disagreement and controversy among progressives when it's addressed. Who has responsibility for ensuring that someone is educated enough to understand a statement? The listener or the speaker?

I think the answer is that there's not a clear and universal binary. Educating other people is a good thing to do, but no one is required to do it all the time at every moment. Educating yourself is a good thing to do, but no one is required to remain mute and outside the conversation until they have a PhD in every subject that could conceivably come up.

It may be that phrasing this as something like a 'burden' or 'duty' is the wrong way to think about it, and 'common understanding of concepts and terms' should just be thought of as a general social good that everyone has some communal duty to maintain but isn't required to devote their life to, like freedom of speech or civility norms.

I also think the idea of intended audience is important here, which you sort of get at with your point about power and reach. I think it's just true that there are cultural spaces where you expect most people to have common context for ideas and language, and I think it's just true that speakers have intended audiences in mind when they speak. Maybe it's the speaker's duty to ensure that what they say is intelligible to their intended audience and in their intended space, and if a listener from outside that audience or space wants to respond, maybe it's that listener's duty to educate themself first.

I don't think it's fair to link this directly to power and reach per se, because the more reach you have, the more impossible it is to educate everyone you could potentially reach - you could hold a seminar on all the terms and ideas you want to use 6 days a week, and someone will stumble upon your twitter feed on the 7th day and demand to know what kind of gibberish nonsense you're talking about. I also don't want the most powerful cultural sources to be artificially limited to only using ideas and terms that everyone already knows, as it simplifies and infantilizes the discussion and precludes them from making many types of useful contributions of ideas - most popular media outlets already write at a 3rd grade reading level, and I do think that has a negative effect on public discourse.

see more
13 points · 10 days ago · edited 9 days ago

This is the divide between high-context cultures, where the listener is responsible for understanding, and low-context cultures, where the speaker is responsible for being understood. Wikipedia has a nice list:

Higher-context culture: Afghans, African, Arabic, Brazilian, Chinese, Filipinos, French Canadian, French, Greek, Hawaiian, Hungarian, Indian, Indonesian, Italian, Irish, Japanese, Korean, Latin Americans, Nepali, Pakistani, Persian, Portuguese, Russian, Southern United States, Spanish, Thai, Turkish, Vietnamese, South Slavic, West Slavic.

Lower-context culture: Australian, Dutch, English Canadian, English, Finnish, German, Israeli, New Zealand, Scandinavia, Switzerland, United States.

There seems to be a correlation between high-context cultures and authoritarianism. My tentative explanation is that in a high-context culture, if the ruler says something and you question it, others think you're dumb for not understanding, while in a low-context culture people feel free to question and criticize what rulers say, leading to a more egalitarian society.

14 points · 13 days ago · edited 12 days ago

Look at surveys of AI researchers, like this or this. The median response puts AI x-risk above 5%, expects AI to arrive within the next 50 years, and thinks the problem gets too little investment.

If you're not an expert, I think this approach gets you closer to truth than listening to journalists, public figures, or even individual experts.

That seems to answer a question the OP didn't ask, and dodge the questions they did ask. Most respondents seem to think that it's possible. They don't seem to discuss the risk much. Though I may be misreading the paper.

see more

They did ask about risk; search for "Russell" in the second paper. But maybe I'm misunderstanding your objection.


Their solution to writing a comic about a character with no character? Make him say everything he thinks. Also make him a lunatic.

see more

You will love this

I mean specifically the singularitarian version where we're going to create immortality by transferring minds from bodies to computers. Massimo Pigliucci has covered this.

https://philpapers.org/rec/PIGMUA

https://www.youtube.com/watch?v=onvAl4SQ5-Q

see more
3 points · 19 days ago · edited 19 days ago

I think that paper by Pigliucci is pretty weak. We haven't found any uncomputable physics, so simulating a brain atom-by-atom is most likely possible in theory (though galactically expensive). The resulting thing can send motor signals that the simulation runner can detect, including the signal "move my lips to say I'm conscious". If you believe it's not conscious anyway, you must believe in p-zombies, which is a huge bullet to bite.

3 points · 18 days ago

Have you seen Aaronson's The Ghost in the Quantum Turing Machine? It gives an example of a possible physics of the brain which (1) is consistent with known physics, and (2) is inconsistent with mind scanning (not for computational reasons, but for informational ones: just as you can't know a particle's position and momentum at once, perhaps it is not possible to measure all the relevant properties of a brain if quantum states are involved).

see more
2 points · 18 days ago · edited 17 days ago

Yeah, I've seen it. My comment was more about Pigliucci's view that consciousness might require a particular substrate (carbon but not silicon), which doesn't make sense to me. You're talking about transfer of consciousness from one substrate to another, which I'm less certain about. That said, the brain seems classical enough so far.

Is uint64_t the fastest choice for the C implementation?

Can someone help me with this problem? How can I determine the odds of one random number being greater than another if they come from different ranges?

Say you have 11.7-21.3 and 14.9-22.7, and you pick a random value within each of those ranges. What are the odds that the value from the first range is greater than the value from the second, and vice versa? What formula can I use?

see more
1 point · 22 days ago · edited 21 days ago

Let's say the ranges are (a,b) and (c,d) and they have nonempty overlap (p,q). Do a case analysis: the first number can be in (a,p) or (p,q) or (q,b), and the second number can be in (c,p) or (p,q) or (q,d). That's nine cases in total. But some cases are impossible; for example, you can't have the first number in (a,p) and the second in (c,p), because at least one of those intervals has zero length. In each of the remaining cases, the probability of the first number winning is either 0 or 1, except the case where both numbers fall in (p,q), in which case the probability is 1/2. The probability of each case is also easy to compute.
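
If it helps, here's a rough Haskell sketch of that case analysis (it assumes the two ranges genuinely overlap); for your example ranges it gives about 0.27 for the first value beating the second:

probFirstGreater :: (Double, Double) -> (Double, Double) -> Double
probFirstGreater (a, b) (c, d) = sum
  [ px piece1 * py piece2 * win piece1 piece2
  | piece1 <- [(a, p), (p, q), (q, b)]
  , piece2 <- [(c, p), (p, q), (q, d)] ]
  where
    p = max a c                          -- overlap start
    q = min b d                          -- overlap end
    px (lo, hi) = (hi - lo) / (b - a)    -- chance the first value lands in this piece
    py (lo, hi) = (hi - lo) / (d - c)    -- chance the second value lands in this piece
    win i j
      | i == (p, q) && j == (p, q) = 0.5 -- both in the overlap: 50/50 by symmetry
      | fst i >= snd j             = 1   -- first piece lies entirely above the second
      | otherwise                  = 0

main :: IO ()
main = print (probFirstGreater (11.7, 21.3) (14.9, 22.7))  -- prints roughly 0.2735

Zero-length pieces contribute nothing because their probability factor is 0, so they need no special handling.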

Let f be a differentiable function such that f' is continuous on [0, 1], and let M be the maximum value of |f'(x)| on [0, 1]. Prove that if f(0) = f(1) = 0, then

the definite integral of |f(x)| from 0 to 1 <= M/4.

I have been stuck on this question for a long time. The strongest upper bound which I can prove is M/2. I did so by applying the mean value theorem to bound |f(x)|. Is there any way to solve this problem?

see more
13 points · 22 days ago · edited 22 days ago

When I think about it geometrically, the first idea that comes to mind is that the graph of f lies in the rhombus with vertices (0,0), (1/2, M/2), (1,0), (1/2, -M/2). So |f| lies in the upper half of the rhombus, which is a triangle with area M/4. In fact I think the inequality is strict, because the only way to achieve the maximum is to hit the apex of the triangle, making f' discontinuous.
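
Spelled out, the same bound as a short derivation (the two sides of the triangle come from applying the mean value theorem at each endpoint):

\[
|f(x)| \le Mx \quad (\text{from } f(0)=0), \qquad |f(x)| \le M(1-x) \quad (\text{from } f(1)=0),
\]
\[
\int_0^1 |f(x)|\,dx \;\le\; M \int_0^1 \min(x, 1-x)\,dx \;=\; M\left(\int_0^{1/2} x\,dx + \int_{1/2}^1 (1-x)\,dx\right) \;=\; \frac{M}{8} + \frac{M}{8} \;=\; \frac{M}{4}.
\]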

I don't see how crypto anarchy could respond to a law requiring hardware makers to put keyloggers in hardware, or requiring anyone who distributes encryption software to provide decryption keys to the government, or a host of other possible laws, some of which can be hidden from the public without losing their effectiveness.

Is there a visual tool for creating interactive gamey things that compile to WASM, like the Flash IDE used to be?

Unity Engine, Godot, and Unreal can all produce HTML5 + WebAssembly builds. Pretty much any desktop game engine or interactive toolkit can be ported to JavaScript and WebAssembly fairly easily. Even before WebAssembly began to take over from the asm.js subset, it only took Mozilla and Epic about 4 days to get Unreal 3 running in the browser.

see more

Very nice, thanks!

That's the most clearly written, intellectually honest, rational thing on that whole bloody website.

see more
6 points · 26 days ago · edited 26 days ago

He changed his mind a few years later.

3 points · 28 days ago · edited 28 days ago

Correct deductions? Not at all. But formed by my experience at least as much as by any reading I've done? Absolutely. I see it in myself with a bunch of views I hold that I know I can't think rationally about (e.g. that Ireland should be one nation, that people shouldn't be disadvantaged by the accident of where they were born). These were formed before I'd done any reading into the matters, and I approach rationally whether or not they are correct.

see more
2 points · 28 days ago · edited 28 days ago

Thanks for the honest reply, and sorry for being a bit nasty to you. My generalization was mostly from people I know who radicalized themselves online (both left and right). All comfortably middle class. It seems like they start from a mild version and then just keep reading stuff that confirms it.

Original Poster · 2 points · 27 days ago · edited 27 days ago

What exactly does 'radicalized' mean here? Does it mean, 'I shitpost a lot in /r/latestagecapitalism,' or does it mean, 'I have decided that property is theft and I will now squat in vacant houses and dumpster dive to survive'? Does it mean, 'I think cops are bad and I will now be Extremely Online and Angry about it,' or 'I have risen to the top of my local Black Lives Matter chapter and staged a protest that shut down a local highway for several hours'? Cause the former is just being opinionated.

The kind of radicalization that gets people out in the streets is generally not driven by reading things online, anyway: https://www.psychologytoday.com/ca/articles/200307/what-makes-activist

Parental modeling can play a significant role in shaping future activists, according to Lauren Duncan, Ph.D., an assistant professor of psychology at Smith College who has studied activism. She found that students with a parent who fought in Vietnam were much more likely to protest against the 1991 Gulf War than those whose parents were not war veterans.

Individuals are more likely to feel a personal connection if they see themselves as part of the community affected by an issue, says Debra Mashek, Ph.D., a research fellow at George Mason University, who specializes in "moral" emotions. Millions of women embraced this sense of collective identity during the women's rights movement, for example.

see more
1 point · 27 days ago · edited 27 days ago

Elliot Rodger was personally affected by the problem of having no luck with women, but so were many other guys who didn't go on shooting sprees. So I think a big part of the reason is that he went looking for confirmation of his views, found it on sites like PUAHate, and that made him double down.

You're right that my comment missed an important step, though. Radicalization comes not just from online echo chambers but also from real-life echo chambers. Still, I wouldn't say it comes from life experience. What life experience could logically justify becoming a dumpster-diving anarchist? It can only come from the influence of books and friends. Which brings me back to the original point: whatever influenced my values so far might be untrustworthy, following such influence off a cliff is not what I truly want, and unwinding some of it is a good idea.


28 points · 29 days ago

Genius people don’t “bahaha” ... we “kekeke”

see more

I prefer hja hja hja

Suppose I have two lists of non-negative integers, and the length of each list is 2^k (indexed from 0).

I want to sort of "multiply" the two lists in a certain way.

Let me give an example: A = [5,3,4,0] and B = [2,0,2,7], and I wanna put the results in C = [0,0,0,0]

I will take each pair of elements A[i] and B[j], and add A[i]*B[j] to C[i xor j].

So, C = [18, 34, 39, 41]

I could do it naively, but that would take quadratic time. I need something faster. I feel like this could be like some sort of FFT thing. Any ideas?

see more

Are you looking for XOR convolution and fast Hadamard transform?
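
A rough sketch in Haskell of the fast Walsh-Hadamard approach, in case it helps (it assumes the length is a power of two): transform both lists, multiply pointwise, then transform again and divide by the length.

-- Fast Walsh-Hadamard transform, which is its own inverse up to a factor of n.
fwht :: [Integer] -> [Integer]
fwht [x] = [x]
fwht xs  = fwht (zipWith (+) a b) ++ fwht (zipWith (-) a b)
  where (a, b) = splitAt (length xs `div` 2) xs

-- XOR convolution: C[i xor j] accumulates A[i]*B[j], in O(n log n) overall.
xorConv :: [Integer] -> [Integer] -> [Integer]
xorConv as bs = map (`div` n) (fwht (zipWith (*) (fwht as) (fwht bs)))
  where n = fromIntegral (length as)

main :: IO ()
main = print (xorConv [5,3,4,0] [2,0,2,7])  -- prints [18,34,39,41], matching your example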

Comment deleted · 29 days ago

I think this is the best you can do. The triangles cover sqrt(3)/2 of the square, so about 87%.

7 points · 1 month ago · edited 1 month ago

Taking a cue from HK Project?

I meant pass their bounding boxes.

see more

Not sure I understand your scenario 2, then. Let's say A is a long thin triangle positioned diagonally. Then its area is much smaller than that of its bounding box, but its triangulation consists of just one triangle, which has the same problem.

Oh shit. Didn't think of that.

Thanks!

So what should I do instead to keep it bounded by O(k)?

see more
1 point · 1 month ago · edited 1 month ago

I don't think it can be done. Consider a thin triangle that stretches across your whole dataset, but whose area is only enough to include one point, so k = 1. There's no way you can answer that query in O(1).

It seems like the best you can do is a quadtree. Querying a quadtree with a triangle is pretty fast, because you can check triangle-rectangle intersection in O(1), and recurse deeper only if there's partial intersection (not fully in or fully out). My napkin says the complexity is O(k + sqrt(n) * d/D), where d is the diameter of the triangle and D is the diameter of the dataset.
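
For illustration, here's a rough Haskell sketch of that query. The shape is abstracted as a cell classifier plus a point test, so the O(1) triangle/rectangle intersection routine is assumed rather than spelled out, and all the names are just for this sketch; the demo at the bottom classifies against an axis-aligned box instead of a triangle.

type Pt   = (Double, Double)
type Rect = (Pt, Pt)                        -- ((xmin, ymin), (xmax, ymax))

data Rel  = Outside | Inside | Partial deriving (Eq, Show)
data Quad = Leaf Rect [Pt]                  -- bucket of points
          | Node Rect [Quad]                -- children covering the cell

bounds :: Quad -> Rect
bounds (Leaf r _) = r
bounds (Node r _) = r

points :: Quad -> [Pt]
points (Leaf _ ps) = ps
points (Node _ cs) = concatMap points cs

-- Recurse only into cells the shape partially overlaps; take whole subtrees
-- that are fully covered, skip cells that are fully outside.
query :: (Rect -> Rel) -> (Pt -> Bool) -> Quad -> [Pt]
query classify hit t = case classify (bounds t) of
  Outside -> []
  Inside  -> points t
  Partial -> case t of
    Leaf _ ps -> filter hit ps
    Node _ cs -> concatMap (query classify hit) cs

-- Demo: classify cells against a query box (a stand-in for the triangle test).
classifyBox :: Rect -> Rect -> Rel
classifyBox ((qx0, qy0), (qx1, qy1)) ((x0, y0), (x1, y1))
  | x1 <= qx0 || x0 >= qx1 || y1 <= qy0 || y0 >= qy1 = Outside
  | x0 >= qx0 && x1 <= qx1 && y0 >= qy0 && y1 <= qy1 = Inside
  | otherwise                                        = Partial

main :: IO ()
main = print (query (classifyBox box) inBox tree)  -- prints [(1.0,1.0),(4.0,4.0)]
  where
    box = ((0, 0), (5, 5))
    inBox (x, y) = x >= 0 && x <= 5 && y >= 0 && y <= 5
    tree = Node ((0, 0), (10, 10))
      [ Leaf ((0, 0), (5, 5))   [(1, 1), (4, 4)]
      , Leaf ((5, 0), (10, 5))  [(6, 1)]
      , Leaf ((0, 5), (5, 10))  [(2, 7)]
      , Leaf ((5, 5), (10, 10)) [(8, 9)]
      ]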


2 points · 1 month ago

Ahh, sorry. You're right. ApplicativeDo + constrained monads will optimize this automatically, and that's great if it works!

In hindsight, it was a bad example to express what I wanted to say. I'll show another example.

I came across the following code running very slowly:

do x1 <- sampleSomething
   (score1, cost1) <- f x1 cost0
   x2 <- sampleSomething
   (score2, cost2) <- g x2 cost1
   x3 <- sampleSomething
   (score3, _) <- h x3 cost2
   return (score1 + score2 + score3)

Rewriting it to the following form greatly reduced the computation time.

do (score1, cost1) <- normalize $
     do x <- sampleSomething
        f x cost0
   (score2, cost2) <- normalize $
     do x <- sampleSomething
        g x cost1
   (score3, _) <- normalize $
     do x <- sampleSomething
        h x cost2
   return (score1 + score2 + score3)

In this case, I knew both score_n and cost_n take very few possible values and have lots of duplication in their results. If cost had very few duplicate results, this rewriting would slightly worsen the performance.

So a constrained monad is not a panacea; that's what I wanted to say. But I stand corrected: there will be many cases where a constrained monad (+ ApplicativeDo) will give us optimization without much work.

see more
1 point · 1 month ago · edited 1 month ago

Hmm, maybe I'm missing something. Even without optimization, with ApplicativeDo this code calls f, g and h twice each, which seems like the best possible result to me:

{-# LANGUAGE ApplicativeDo #-}

import Debug.Trace (trace)

s = [1,2]
f x = trace "f" [x+3,x+4]
g x = trace "g" [x+5,x+6]
h x = trace "h" [x+7,x+8]

main = print $ do
  x1 <- s
  s1 <- f x1
  x2 <- s
  s2 <- g x2
  x3 <- s
  s3 <- h x3
  return (s1+s2+s3)

And I guess with a constrained applicative all intermediate results would be normalized as well, because there's just no way to get a non-normalized state.

In my example, g depends on cost1 which is the output of f, and h depends on cost2 which is the output of g. Your example has no dependency between f, g, h.

see more
1 point · 1 month ago · edited 1 month ago

Oh! Sorry, I missed that. You're right. If the computation requires monadic bind, it will branch and never unbranch, and the stuff we discussed can't help with that. Adding normalize calls manually seems to be the only way :-(

I keep wishing for do-notation to work differently, as if each x <- ... preferred to avoid branching the whole future. But your example shows that it's pretty hard. It's not just a question of ApplicativeDo; the compiler would also need to normalize the intermediate result whenever a variable becomes unneeded, like x1 after the call to f.


5 points · 1 month ago · edited 1 month ago

Is there a word for science fiction that tries to play out a hypothetical scenario, but that scenario has already played out in real life and proves the fiction dead wrong? Because this is an example of such fiction. Someone should tell Cory about the word "sharashka" and what actually happens when a police state requires nerd labor. If you think they will let nerds have their happy monastery, boy you're in for a surprise.

Original Poster · 6 points · 1 month ago

Idk, the Silicon Valley nerd laborers being used to support our current police state seem comfortable enough. "The campus" is pretty blatantly the Googleplex.

see more
2 points · 1 month ago · edited 1 month ago

US police don't "disappear" people, though. An actual police state would look like this.

The compiler already does what you want if you enable optimizations!

With -O0 you get

f
g
h
h
g
h
h
[0,0,0,0,0,0,0,0]

with -O1, however

f
g
h
[0,0,0,0,0,0,0,0]
see more
1 point · 1 month ago · edited 1 month ago

Wow! Thank you! I only ran this on repl.it and thought that's just the way things are. Haskell continues to pleasantly surprise :-)

I don't understand how h can be called only once, though...

1 point · 1 month ago · edited 1 month ago

I'm wondering if do-notation could work differently on containers:

import Debug.Trace (trace)

f () = trace "f" [1, 2]
g () = trace "g" [3, 4]
h x  = trace "h" [5, 6]

main = print $ do
  x <- f ()
  y <- g ()
  z <- h x
  return 0

Currently it calls f once, g twice, and h four times. With ApplicativeDo enabled, it calls f and g once, and h four times. (With a different desugaring choice, g and h could be called twice.)

I think it would be more intuitive and efficient if f and g were called once, and h was called as many times as there are elements in f (), which means twice. ApplicativeDo lets us go from (1,2,4) to either (1,1,4) or (1,2,2) but it seems to me that the compiler has enough information to achieve the optimal plan, leading to (1,1,2). Has anyone else had this idea?
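
For what it's worth, the (1,1,2) plan is expressible by hand today if you reassociate the binds yourself: pair up the dependent part (f and h) monadically, then combine with g applicatively so the g () list is shared. A small sketch reusing the same f, g, h as above (this just shows the counts are reachable, not that any compiler does it for you):

import Debug.Trace (trace)

f () = trace "f" [1, 2]
g () = trace "g" [3, 4]
h x  = trace "h" [5, 6]

-- xz binds x and z together (f forced once, h twice); g () is then a single
-- shared list combined applicatively, so its trace fires only once.
main = print $ (\(x, z) y -> 0 :: Int) <$> xz <*> g ()
  where xz = do { x <- f (); z <- h x; return (x, z) }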


25 points · 1 month ago

It's easy to get people to hate each other and do shitty things to one another, but how you go from that to industrial scale slaughter is still basically mysterious.

I'm tempted to say it requires the feeling that your group faces an existential threat from the victims of the slaughter, but I'm not sure that's correct. The various communist regimes systematically murdered tens of millions, and I'm not sure any of them were in the grip of such delusional paranoia, existential dread, and burning desire to avenge the humiliations of the past as the Nazis were in Germany.

Is one common thread that the people doing the butchering thought they were creating a 'new man' for a 'new world' that would be utopian?

It does seem that the grander the cause to which you have attached yourself, the greater the ability to rationalize systematically evil deeds.

see more

The common thread is just "us vs them".

I loved the Quest for Glory series growing up. I kickstarted this game and am elated that it has finally released...but holy cow I don't know if I can get over the art direction and character design. The main character especially just looks awful.

see more
3 points · 1 month ago · edited 1 month ago

I loved playing the QfG games as a child too, but they never had good art. The new one just has higher resolution, so the problems with artistic taste are more apparent.

The paper says the new algorithm is efficient, but doesn't seem to have any measurements.
