Wikipedia:Reference desk/Archives/Computing/2015 February 12

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 12

Appearance, preview font

How can I get out of this typewriter-font preview mode back to the previous one, which was much easier to read?

Also, why does the display/appearance of everything on WP look sketchy today? (Other websites look normal.)

Sca (talk) 13:55, 12 February 2015 (UTC)[reply]

WP looks normal to me today. Maybe clear your browser cache, and double check your preferences pane? SemanticMantis (talk) 15:11, 12 February 2015 (UTC)[reply]
What OS? --  Gadget850 talk 15:17, 12 February 2015 (UTC)[reply]
Windows Vista. Sca (talk)
Typically, that happens if the style sheet has not been loaded properly, and the web browser uses the default styles. If it is a transient error, a reload will often fix it. --Stephan Schulz (talk) 15:44, 12 February 2015 (UTC)[reply]
How would I reload the stylesheet? Sorry I'm not familiar with these matters. Sca (talk) 14:30, 13 February 2015 (UTC)[reply]
To reload the site style sheets, simply refresh the page. If you are still having problems, please tell us the original issue, what you have done, and what the issue is now. --  Gadget850 talk 14:45, 13 February 2015 (UTC)[reply]
Probably due to the above. -- Edokter (talk) 16:06, 12 February 2015 (UTC)[reply]
When fonts start going funny on you, a reboot is the first thing to try. StuRat (talk) 17:10, 12 February 2015 (UTC)[reply]

 Done — Preview font still looks weird (for one thing, dashes & hyphs aren't visible). Sca (talk) 17:11, 12 February 2015 (UTC)[reply]

Gadget850: To recap, yesterday (2/12) I loaded a bunch of Windows Vista updates. (I loaded another one this a.m.) When I went to WP, I found that typography didn't look normal, and more important, that in edit mode the preview font had changed to a faint, sketchy typewriter-style font – in which hyphs & dashes aren't even visible.
I did fiddle around in preferences-appearance, but soon went back to MonoBook, the format I prefer. Since then I've refreshed and reloaded pages multiple times, and of course shut down & restarted the computer several times. I'm still having the same problem with the preview font, which is very annoying – although otherwise the appearance of the site is somewhat different but OK.
I'm not at all sure the change is due to the updates, because they dealt, as far as I can tell, solely with security issues. Sca (talk) 15:40, 13 February 2015 (UTC)[reply]
Update KB3013455 has a known issue with font corruption. Uninstall it. --  Gadget850 talk 15:46, 13 February 2015 (UTC)[reply]
Well, KB3013455 (titled Security Update for Windows Vista) is listed on Update History for 2/12 (along with a bunch of others), but when I go to Uninstall or Change a Program (under Programs and Features on the Control Panel), it's not listed. All that's listed there as having been installed on 2/12 is Compatibility Pack for the 2007 Office system. I'm somewhat nervous about uninstalling the whole thing, but I'm not a techie and don't really understand these things. Sca (talk) 16:47, 13 February 2015 (UTC)[reply]
PS: Would changing the browser make any diff? (Using Firefox.) Sca (talk) 18:23, 13 February 2015 (UTC)[reply]
Start → Control Panel → Programs → View Installed Updates → select KB3013455 → Uninstall
--  Gadget850 talk 18:33, 13 February 2015 (UTC)[reply]
OK, I think I'm finally gettin' it.... Sca (talk) 22:13, 13 February 2015 (UTC)[reply]
 Done — Thanks much for your help. Sca (talk) 22:27, 13 February 2015 (UTC)[reply]
PS: Gadget850, after uninstalling KB3013455 ystdy, MS told me this morning (2/14) they had another update, so I installed it and – Bingo! – same prob. It turned out to be KB3013455 again, and I had to uninstall it again. If the problem is "known," why don't they FIX the %#@!&*># thing? Sca (talk) 15:58, 14 February 2015 (UTC)[reply]
It is going to keep showing up. Right click on it and select 'hide'. --  Gadget850 talk 16:09, 14 February 2015 (UTC)[reply]
http://support.microsoft.com/kb/3013455 says: "Microsoft is researching this problem and will post more information in this article when the information becomes available." PrimeHunter (talk) 16:42, 14 February 2015 (UTC)[reply]
Yup, surfaced from the Black Lagoon again today. Sca (talk) 15:00, 15 February 2015 (UTC)[reply]


How to uninstall the single update KB3013455

  • Start → Control Panel → Programs and Features → View Installed Updates
  • Give the PC time to load them all
  • Search for KB3013455 (the search box is at the top right of the window)
  • Right-click the update, then select Uninstall
  • This requires a restart of your system (it will prompt automatically)
-- Moxy (talk) 18:56, 14 February 2015 (UTC)[reply]
Users can also try changing fonts instead of uninstalling this security update. The problem may affect other programs and other websites, but here is a quick Wikipedia-only fix which is acceptable (not great) for me until Microsoft hopefully fixes it: choose "Sans-serif font" under "Edit area font style" at Special:Preferences#mw-prefsection-editing. PrimeHunter (talk) 01:35, 15 February 2015 (UTC)[reply]

Why are C arrays so vulnerable?

With something like Code::Blocks, the path to a runtime error from memory corruption seems incredibly easy:

int a[5]; a[5] = 2;  /* valid indices are 0 through 4; index 5 writes one past the end */

I understand that the lack of checking speeds things up, but I feel like this must compromise overall security. I mean (purely hypothetical example), if you download some pristine freeware program to do weather prediction or genome analysis and it accepts up to 100 sets of custom parameters from some data file it uses (never expecting more than 2 or 3 to really happen, so never bothering to test the extreme), and the wrong person sends you a data file with 100,000 sets that your freeware dutifully embeds into its array, I'd think you'd end up executing malicious code. How often does that happen, anyway? Wnt (talk) 21:30, 12 February 2015 (UTC)[reply]
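For illustration, here is a minimal sketch of the scenario described above; the file format and all names (MAX_SETS, param_t, load_params) are invented for this example:

#include <stdio.h>

#define MAX_SETS 100                 /* the bound the programmer assumed */

typedef struct { double value; } param_t;   /* hypothetical parameter record */

/* Reads one parameter per line into a fixed-size array. The bug: the loop
   trusts the input and never compares i against MAX_SETS, so a file with
   100,000 entries writes far past the end of the array, corrupting memory
   an attacker may control. The fix is to add "i < MAX_SETS &&" to the
   loop condition. */
int load_params(FILE *f, param_t *params) {
    int i = 0;
    double v;
    while (fscanf(f, "%lf", &v) == 1)
        params[i++].value = v;       /* unchecked write: the classic overflow */
    return i;
}

int main(void) {
    param_t params[MAX_SETS];
    int n = load_params(stdin, params);
    printf("read %d parameter sets\n", n);
    return 0;
}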

It's a design decision made around 1970. Given that C is still one of the most used languages after 45 years, it's hard to say that this was a wrong decision. It fits C's niche as a portable assembler with quite decent abstractions. Yes, it's easy to write buggy code in C, and yes, you can eliminate some classes of errors by doing more checking in the language. But on the other hand, C has very simple syntax and semantics, good efficiency, and a lot of flexibility. People seem to like the overall deal they get... --Stephan Schulz (talk) 21:42, 12 February 2015 (UTC)[reply]
Good code will have Secure input and output handling, regardless of what language it is written in. Vespine (talk) 22:02, 12 February 2015 (UTC)[reply]
C doesn't really have arrays; it has syntax with square brackets that looks like arrays, but is just syntactic sugar for pointer arithmetic. C doesn't, for the most part, know the size of the "array" a pointer points to (and the pointer may not refer to the start of that "array" anyway), so the compiler can only do so much - some modern compilers try to draw some kind of inference, but they're scuppered with things like:
char * cp = malloc(sizeof(char)*10);  /* allocates 10 bytes; cp is just a pointer */
cp[12] = 'f';                         /* writes past the allocation; the compiler has no way to tell */
At the risk of reigniting a long dead language war, Pascal did have real arrays (which means the length was stored and available at run time, for arrays and "strings", ahem); most practical Pascal implementations had switchable (at compile time) bounds checking on arrays. Stephan's point is the central one - C's design was appropriate for tiny, puny 1970s machines; perhaps Pascal's was a better decision for the 1980s, and we'd probably have had a somewhat more secure 1990s had Pascal rather than C won the systems programming mindshare. Pascal variants like Turbo Pascal, which allowed optional relaxation of Pascal's sometimes tiresome type system and allowed things like casting packed array of char to a record (or vice versa), meant we could do pretty much the same memory-munging as we could in C, but with (optionally) safe arrays, real enums, and a type system that's more than decoration. But C did survive and Pascal mostly didn't, and C is still surviving, still embodying a bunch of design decisions that don't make sense 40-odd years later. Now we'd probably be better off if current system programmers moved to modern languages aimed at that space (perhaps Rust or Go), but most won't. Like a lot of technological standards, like railway gauges and electrical sockets, we're often stuck with designs that aren't what we'd choose now, but which are too expensive for our society to change. -- Finlay McWalterTalk 23:32, 12 February 2015 (UTC)[reply]
Let me just make it clear that according to the C standard, an object declared as int a[5]; is called an array, even if Finlay is right that it doesn't have the properties that he feels a "real" array should have. And its size is available at compile time as sizeof a if you want it in bytes, and as sizeof a/sizeof(int) or sizeof a/sizeof a[0] if you want the number of elements. What you can't do in C is create an array whose size is determined at run time; you can only get a block of memory (using malloc) and access it (using a pointer) as if it was an array. (Which likewise means you can't use sizeof to find out how big it is.) Also, you can't pass an array by value to a function; you pass a pointer to the first element. (Which again means you can't use sizeof inside the function to find out how big the array is.)
People sometimes get the idea that arrays and pointers are the same thing in C; they aren't. (I know, Finlay didn't say they are.) This mainly comes about because C uses the same syntax q[i] to access an array element no matter whether q is the array itself or a pointer to its first element. Also, similarly, an array-like syntax can be used to declare pointers. This is intended as a convenience for the programmer; it does not mean that an array type is not a type of its own, distinct from pointers.
I'm writing this as a programmer of traditional C as standardized around 1990; I'm less familiar with the changes made in the 1999 standard and any later ones, and it's possible in particular that the passing of function parameters has changed in ways I don't remember, and there are alternatives to malloc. But my basic points about terminology and sizeof haven't changed. --70.49.169.244 (talk) 09:11, 13 February 2015 (UTC)[reply]
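A minimal sketch of the sizeof points above (it uses C99's %zu format specifier for brevity; the names are illustrative):

#include <stdio.h>

/* In a parameter declaration, "int b[]" really means "int *b", so inside
   the function sizeof b is the size of a pointer, not of the caller's array. */
static void f(int b[]) {
    printf("inside f: sizeof b = %zu (a pointer)\n", sizeof b);
}

int main(void) {
    int a[5];
    printf("sizeof a = %zu bytes, %zu elements\n",
           sizeof a, sizeof a / sizeof a[0]);   /* e.g. 20 bytes, 5 elements */
    f(a);   /* a decays to a pointer to its first element */
    return 0;
}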
It's incredibly common. Buffer overflow has a good overview of the problem, its history and a variety of counter-measures, though not any assessment of the scale of the problem in deployed software. The Common Weakness Enumeration had a Top 25 Most Dangerous Software Errors list, which they unfortunately stopped publishing in 2011; it ranks "Buffer copy without checking size of input" as the third most dangerous error and "Incorrect calculation of buffer size" as the twentieth most dangerous; both are variants of the bug you describe. Heartbleed, one of the mega-bugs of 2014, is also, when you get down to it, a failure to check buffer sizes before copying memory; the same bug.
Note that the speed argument is not all one-sided; for instance, programs using Fortran-style strings (which are not null-terminated and store their length as well as the string content) are often much faster than the equivalent using C strings, because they avoid a lot of scanning for the null terminator. Joel Spolsky has written eloquently on this subject. GoldenRing (talk) 23:40, 12 February 2015 (UTC)[reply]
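A minimal sketch of the length-prefixed idea (an illustration of the principle, not Fortran's actual string representation; all names are invented):

#include <string.h>

/* A length-prefixed string: the length is stored alongside the data,
   so it never has to be recomputed by scanning for a terminator. */
typedef struct {
    size_t len;
    char data[64];              /* fixed capacity for this sketch */
} lstring;

/* O(1): just read the stored field. strlen() on a C string is O(n). */
size_t lstring_len(const lstring *s) { return s->len; }

/* Appending knows exactly where the end is without scanning, and it can
   check capacity before writing, unlike strcat(). */
int lstring_append(lstring *dst, const char *src, size_t n) {
    if (dst->len + n > sizeof dst->data)
        return -1;              /* would overflow: refuse instead of corrupting */
    memcpy(dst->data + dst->len, src, n);
    dst->len += n;
    return 0;
}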
To decrease negative effects of off-by-one errors and the like as in int a[5]; a[5] = 2;, I often declare C arrays one larger than needed. A compiler would also be free to automatically reserve space for one more element than declared in suitable conditions (like elements which aren't huge and arrays which aren't allocated a large number of times), but I don't know whether any do. PrimeHunter (talk) 23:47, 12 February 2015 (UTC)[reply]
Malloc Guard does extra allocation beyond the bounds of an allocated object or array, but it works at the granularity of pages. The idea is to catch errors during specialized debug builds, not to encourage programmers to get sloppy or make common off-by-one array out-of-bounds errors. Nimur (talk) 01:00, 13 February 2015 (UTC)[reply]
Thanks for a lot of good responses. I'm still wondering, though: why is this kind of error mostly associated with data buffers, which seem like a specialized function, when there are so many just plain arrays that are used carelessly in a much wider range of programs that don't deal with streams of data? And why is this response treated as "locked in" to the C family of languages when I'd think that a compiler could rather simply be reprogrammed to look for off-by-one(or more) errors, to issue warnings about all statements that potentially could exceed array bounds due to arithmetic, and even to program in a check that the index is within bounds that happens before any element is written? Wnt (talk) 00:30, 13 February 2015 (UTC)[reply]
Array overflow is a common error in many situations but security problems get the most public attention and data buffers are common there, because an attacker can often supply the data and control which data (such as executable code) is written in a wrong place. Also, most bugs are discovered before a program is released but one type of bug which may go unnoticed is poor reaction to data which is much larger than expected for a given purpose, for example a field supposed to only contain a date, color code, money amount, file name, etc. PrimeHunter (talk) 01:01, 13 February 2015 (UTC)[reply]
The C standard is designed to allow bounds-checked implementations. They seem to be rare to nonexistent in the wild. I imagine this is because safe pointers are larger than unsafe pointers, which means the ABI is incompatible, which means you either have to recompile everything (even the operating system) or else write complex ABI translation wrappers. (And recompilation would only work if none of the code makes assumptions that are incompatible with bounds checking, such as sizeof(void*) == sizeof(size_t), and a lot of code, especially OS code, does make such assumptions.)
The difficulty with warning about possible buffer overflows is that it's generally very hard to prove that any pointer dereference is in bounds, so you will likely get an unmanageably large number of false positives. There are formal verification tools that try to prove the absence of buffer overflows, but I think they need a lot of human assistance. There are also tools like Valgrind and Purify that hack runtime bounds checking into C code post-compilation. Intel MPX seems to be an attempt to allow compilers to add bounds checking without breaking the ABI. -- BenRG (talk) 01:06, 13 February 2015 (UTC)[reply]
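A minimal sketch of why checked pointers are larger, as described above (the struct and names are invented for illustration):

#include <stdlib.h>

/* An ordinary C pointer is one machine word. A bounds-checked "fat"
   pointer must also carry the limits of the object it may access,
   tripling its size - which is what breaks the ABI. */
typedef struct {
    char *addr;                 /* current address                */
    char *base;                 /* start of the underlying object */
    char *limit;                /* one past the end of the object */
} fat_ptr;

/* Every dereference first checks addr against [base, limit). */
char fat_read(fat_ptr p) {
    if (p.addr < p.base || p.addr >= p.limit)
        abort();                /* trap the out-of-bounds access */
    return *p.addr;
}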
I am curious about the first part of your statement, that the C standard is designed to allow bounds-checked implementations. Can you provide a reference describing the provisions in the standard? GoldenRing (talk) 04:36, 13 February 2015 (UTC)[reply]
In the 1999 standard the most important rule is part of section 6.5.6, Additive Operators. This describes the effect of taking a pointer and adding or subtracting an integer: it reads in part, "...if the expression P points to the i-th element of an array object, the expressions (P)+N ... and (P)-N (where N has the value n) point to, respectively, the (i+n)-th and (i-n)-th elements of the array, provided they exist... If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined." Section 6.5.2.1, Array Subscripting, defines [...] in such a way that the same rules apply to P[N].
Note the phrase "the behavior is undefined". This is defined in section 3.4.3 as meaning that the standard imposes no requirements on an implementation if the situation occurs. In other words, the implementation is within its rights to not even test for whether the situation has occurred and to produce the wild behavior we typically see in cases of buffer overflow. But it is also within its rights to maintain the necessary information to test for the behavior, and trap it as an error—in other words, to do bounds-checking.
You may wonder how bounds checking can be permissible if 6.5.6 allows you to compute a pointer one past the end of an array. The answer is that there's additional wording that I did not quote, which says that you can compute the pointer but you can't use it to access that position (again, if you do, the behavior is undefined). This allows you to write code like int *p; for (p = &a[0]; p < &a[n]; ++p), where n is the size of the array. Then *p will access each element of the array in turn. Here &a[n] is a one-past-the-end pointer, but you aren't ever accessing a[n] itself, which would be an error, you're just taking its address. (By the way, if you have two pointers that don't point to elements of the same array or larger structure, it's also undefined behavior if you simply compare them with something like if (p1 < p2). You are allowed to compare them for equality, though.)
--70.49.169.244 (talk) 09:43, 13 February 2015 (UTC), minor edit later.[reply]
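Put together as a complete fragment, the one-past-the-end idiom from the previous paragraph looks like this (a minimal sketch):

#include <stdio.h>

int main(void) {
    int a[5] = {10, 20, 30, 40, 50};
    int n = 5;
    int *p;
    /* &a[n] is the one-past-the-end pointer: legal to form and to
       compare against, but never dereferenced by this loop. */
    for (p = &a[0]; p < &a[n]; ++p)
        printf("%d\n", *p);
    return 0;
}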
Ta. I guess I'm so used to thinking of undefined behaviour as something to be avoided at all costs, I had never thought of it as an opportunity for bounds-checking array accesses. GoldenRing (talk) 10:28, 13 February 2015 (UTC)[reply]
C++ is largely outside the scope of this question, but it is worth mentioning that C++11 has std::array with the same semantics and efficiency as a C array, but the option of addressing it with either unchecked arr[x] syntax or bounds-checked arr.at(x) syntax, and functions to iterate over it and get its size.-gadfium 21:16, 13 February 2015 (UTC)[reply]
There's also the compile-time checked std::get<2>(arr) access for C++11 arrays.-gadfium 21:26, 13 February 2015 (UTC)[reply]
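A small C++11 sketch of the three access styles mentioned above (std::array, .at(), and std::get are real C++11 facilities; the variable names are illustrative):

#include <array>
#include <cstdio>
#include <stdexcept>

int main() {
    std::array<int, 5> arr = {1, 2, 3, 4, 5};

    arr[2] = 30;                              // unchecked, same cost as a C array
    std::printf("size = %zu\n", arr.size());  // the size travels with the object

    int third = std::get<2>(arr);             // index checked at compile time:
    std::printf("third = %d\n", third);       // std::get<7>(arr) would not compile

    try {
        arr.at(7) = 0;                        // bounds-checked at run time:
    } catch (const std::out_of_range &) {     // throws instead of corrupting memory
        std::puts("caught out-of-range access");
    }
    return 0;
}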