Revision Message Commit Date
3dca6f3 Stamp 9.1.20. 08 February 2016, 21:21:40 UTC
862b4a4 Translation updates Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: bbedbfae7586389e1f43b8116d76af3ac528c211 08 February 2016, 19:50:20 UTC
730c89b Last-minute updates for release notes. Security: CVE-2016-0773 08 February 2016, 15:49:38 UTC
98d6b73 Fix some regex issues with out-of-range characters and large char ranges. Previously, our regex code defined CHR_MAX as 0xfffffffe, which is a bad choice because it is outside the range of type "celt" (int32). Characters approaching that limit could lead to infinite loops in logic such as "for (c = a; c <= b; c++)" where c is of type celt but the range bounds are chr. Such loops will work safely only if CHR_MAX+1 is representable in celt, since c must advance to beyond b before the loop will exit. Fortunately, there seems no reason not to restrict CHR_MAX to 0x7ffffffe. It's highly unlikely that Unicode will ever assign codes that high, and none of our other backend encodings need characters beyond that either. In addition to modifying the macro, we have to explicitly enforce character range restrictions on the values of \u, \U, and \x escape sequences, else the limit is trivially bypassed. Also, the code for expanding case-independent character ranges in bracket expressions had a potential integer overflow in its calculation of the number of characters it could generate, which could lead to allocating too small a character vector and then overwriting memory. An attacker with the ability to supply arbitrary regex patterns could easily cause transient DOS via server crashes, and the possibility for privilege escalation has not been ruled out. Quite aside from the integer-overflow problem, the range expansion code was unnecessarily inefficient in that it always produced a result consisting of individual characters, abandoning the knowledge that we had a range to start with. If the input range is large, this requires excessive memory. Change it so that the original range is reported as-is, and then we add on any case-equivalent characters that are outside that range. With this approach, we can bound the number of individual characters allowed without sacrificing much. This patch allows at most 100000 individual characters, which I believe to be more than the number of case pairs existing in Unicode, so that the restriction will never be hit in practice. It's still possible for range() to take awhile given a large character code range, so also add statement-cancel detection to its loop. The downstream function dovec() also lacked cancel detection, and could take a long time given a large output from range(). Per fuzz testing by Greg Stark. Back-patch to all supported branches. Security: CVE-2016-0773 08 February 2016, 15:25:40 UTC
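To illustrate the loop hazard described above, here is a minimal standalone C sketch (not the actual regex code; the names and bounds are illustrative): a scan of the form "for (c = a; c <= b; c++)" over a signed 32-bit counter terminates only if the upper bound plus one is still representable, which is why CHR_MAX had to be pulled back inside the int32 range.

    /* Minimal standalone sketch, not the PostgreSQL regex code: shows why a
     * range scan over a signed 32-bit counter is only safe when the upper
     * bound plus one is still representable in that type. */
    #include <stdint.h>
    #include <stdio.h>

    #define DEMO_CHR_MAX 0x7ffffffe     /* tightened limit: DEMO_CHR_MAX + 1 fits in int32_t */

    static long
    count_range(int32_t a, int32_t b)
    {
        long    n = 0;

        /* Terminates because c can advance past b without overflowing.
         * If b were INT32_MAX, "c <= b" could never become false and the
         * loop would spin forever (with the increment overflowing). */
        for (int32_t c = a; c <= b; c++)
            n++;
        return n;
    }

    int
    main(void)
    {
        printf("%ld\n", count_range(DEMO_CHR_MAX - 3, DEMO_CHR_MAX));   /* prints 4 */
        return 0;
    }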
f6c7bfb Improve documentation about PRIMARY KEY constraints. Get rid of the false implication that PRIMARY KEY is exactly equivalent to UNIQUE + NOT NULL. That was more-or-less true at one time in our implementation, but the standard doesn't say that, and we've grown various features (many of them required by spec) that treat a pkey differently from less-formal constraints. Per recent discussion on pgsql-general. I failed to resist the temptation to do some other wordsmithing in the same area. 07 February 2016, 21:02:44 UTC
2d59325 Release notes for 9.5.1, 9.4.6, 9.3.11, 9.2.15, 9.1.20. 07 February 2016, 19:16:32 UTC
b1f591c Force certain "pljava" custom GUCs to be PGC_SUSET. Future PL/Java versions will close CVE-2016-0766 by making these GUCs PGC_SUSET. This PostgreSQL change independently mitigates that PL/Java vulnerability, helping sites that update PostgreSQL more frequently than PL/Java. Back-patch to 9.1 (all supported versions). 06 February 2016, 01:23:19 UTC
6887d72 Update time zone data files to tzdata release 2016a. DST law changes in Cayman Islands, Metlakatla, Trans-Baikal Territory (Zabaykalsky Krai). Historical corrections for Pakistan. 05 February 2016, 15:59:39 UTC
9c70463 In pg_dump, ensure that view triggers are processed after view rules. If a view is split into CREATE TABLE + CREATE RULE to break a circular dependency, then any triggers on the view must be dumped/reloaded after the CREATE RULE; else the backend may reject the CREATE TRIGGER because it's the wrong type of trigger for a plain table. This works all right in plain dump/restore because of pg_dump's sorting heuristic that places triggers after rules. However, when using parallel restore, the ordering must be enforced by a dependency --- and we didn't have one. Fixing this is a mere matter of adding an addObjectDependency() call, except that we need to be able to find all the triggers belonging to the view relation, and there was no easy way to do that. Add fields to pg_dump's TableInfo struct to remember where the associated TriggerInfo struct(s) are. Per bug report from Dennis Kögel. The failure can be exhibited at least as far back as 9.1, so back-patch to all supported branches. 04 February 2016, 05:26:10 UTC
4c8b07d pgbench: Install guard against overflow when dividing by -1. Commit 64f5edca2401f6c2f23564da9dd52e92d08b3a20 fixed the same hazard on master; this is a backport, but the modulo operator does not exist in older releases. Michael Paquier 03 February 2016, 14:25:34 UTC
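For context, a minimal hedged sketch of the kind of guard involved (generic int64 arithmetic, not the pgbench code itself): dividing the most negative 64-bit integer by -1 overflows, and on many platforms the CPU traps on it, so a divisor of -1 has to be special-cased before the division.

    /* Minimal sketch of guarding a 64-bit division; not the pgbench code. */
    #include <stdint.h>

    /* Returns 0 on success, -1 on division by zero or overflow. */
    static int
    safe_div64(int64_t lval, int64_t rval, int64_t *result)
    {
        if (rval == 0)
            return -1;                      /* division by zero */
        if (rval == -1)
        {
            if (lval == INT64_MIN)
                return -1;                  /* -INT64_MIN is not representable */
            *result = -lval;                /* avoids the hardware divide entirely */
            return 0;
        }
        *result = lval / rval;
        return 0;
    }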
79782b4 Make sure ecpg header files do not have a comment spanning several lines, one of which is a preprocessor directive. That caused ecpg to incorrectly parse the comment as nested. 01 February 2016, 12:19:43 UTC
d9c76aa Fix error in documented use of mingw-w64 compilers. Error reported by Igal Sapir. 31 January 2016, 00:32:19 UTC
ed5f572 Fix incorrect pattern-match processing in psql's \det command. listForeignTables' invocation of processSQLNamePattern did not match up with the other ones that handle potentially-schema-qualified names; it failed to make use of pg_table_is_visible() and also passed the name arguments in the wrong order. Bug seems to have been aboriginal in commit 0d692a0dc9f0e532. It accidentally sort of worked as long as you didn't inquire too closely into the behavior, although the silliness was later exposed by inconsistencies in the test queries added by 59efda3e50ca4de6 (which I probably should have questioned at the time, but didn't). Per bug #13899 from Reece Hart. Patch by Reece Hart and Tom Lane. Back-patch to all affected branches. 29 January 2016, 09:28:03 UTC
b043df0 Fix startup so that log prefix %h works for the log_connections message. We entirely randomly chose to initialize port->remote_host just after printing the log_connections message, when we could perfectly well do it just before, allowing %h and %r to work for that message. Per gripe from Artem Tomyuk. 26 January 2016, 20:38:33 UTC
b1bc381 Properly install dynloader.h on MSVC builds This will enable PL/Java to be cleanly compiled, as dynloader.h is a requirement. Report by Chapman Flack Patch by Michael Paquier Backpatch through 9.1 20 January 2016, 04:30:28 UTC
161a767 Fix spelling mistake. Same patch submitted independently by David Rowley and Peter Geoghegan. 15 January 2016, 04:16:35 UTC
b1c0f92 Properly close token in sspi authentication We can never leak more than one token, but we shouldn't do that. We don't bother closing it in the error paths since the process will exit shortly anyway. Christian Ullrich 14 January 2016, 12:08:10 UTC
5108013 Handle extension members when first setting object dump flags in pg_dump. pg_dump's original approach to handling extension member objects was to run around and clear (or set) their dump flags rather late in its data collection process. Unfortunately, quite a lot of code expects those flags to be valid before that; which was an entirely reasonable expectation before we added extensions. In particular, this explains Karsten Hilbert's recent report of pg_upgrade failing on a database in which an extension has been installed into the pg_catalog schema. Its objects are initially marked as not-to-be-dumped on the strength of their schema, and later we change them to must-dump because we're doing a binary upgrade of their extension; but we've already skipped essential tasks like making associated DO_SHELL_TYPE objects. To fix, collect extension membership data first, and incorporate it in the initial setting of the dump flags, so that those are once again correct from the get-go. This has the undesirable side effect of slightly lengthening the time taken before pg_dump acquires table locks, but testing suggests that the increase in that window is not very much. Along the way, get rid of ugly special-case logic for deciding whether to dump procedural languages, FDWs, and foreign servers; dump decisions for those are now correct up-front, too. In 9.3 and up, this also fixes erroneous logic about when to dump event triggers (basically, they were *always* dumped before). In 9.5 and up, transform objects had that problem too. Since this problem came in with extensions, back-patch to all supported versions. 13 January 2016, 23:55:27 UTC
405635a Clean up some lack-of-STRICT issues in the core code, too. A scan for missed proisstrict markings in the core code turned up these functions: brin_summarize_new_values pg_stat_reset_single_table_counters pg_stat_reset_single_function_counters pg_create_logical_replication_slot pg_create_physical_replication_slot pg_drop_replication_slot The first three of these take OID, so a null argument will normally look like a zero to them, resulting in "ERROR: could not open relation with OID 0" for brin_summarize_new_values, and no action for the pg_stat_reset_XXX functions. The other three will dump core on a null argument, though this is mitigated by the fact that they won't do so until after checking that the caller is superuser or has rolreplication privilege. In addition, the pg_logical_slot_get/peek[_binary]_changes family was intentionally marked nonstrict, but failed to make nullness checks on all the arguments; so again a null-pointer-dereference crash is possible but only for superusers and rolreplication users. Add the missing ARGISNULL checks to the latter functions, and mark the former functions as strict in pg_proc. Make that change in the back branches too, even though we can't force initdb there, just so that installations initdb'd in future won't have the issue. Since none of these bugs rise to the level of security issues (and indeed the pg_stat_reset_XXX functions hardly misbehave at all), it seems sufficient to do this. In addition, fix some order-of-operations oddities in the slot_get_changes family, mostly cosmetic, but not the part that moves the function's last few operations into the PG_TRY block. As it stood, there was significant risk for an error to exit without clearing historical information from the system caches. The slot_get_changes bugs go back to 9.4 where that code was introduced. Back-patch appropriate subsets of the pg_proc changes into all active branches, as well. 09 January 2016, 21:58:33 UTC
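As a reminder of the convention involved, here is a minimal sketch of a backend C function in the usual fmgr V1 style (the function name is invented, not one of the functions listed above): a function not marked strict must test PG_ARGISNULL() itself, whereas marking it strict in pg_proc lets the executor skip the call on null input altogether.

    /* Minimal sketch of the fmgr V1 pattern; illustrative only. */
    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(demo_takes_oid);

    Datum
    demo_takes_oid(PG_FUNCTION_ARGS)
    {
        /* Needed only when the function is not marked strict; otherwise the
         * executor never calls us with a null argument. */
        if (PG_ARGISNULL(0))
            PG_RETURN_NULL();

        Oid     relid = PG_GETARG_OID(0);

        /* ... act on relid ... */
        PG_RETURN_OID(relid);
    }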
fe25785 Clean up code for widget_in() and widget_out(). Given syntactically wrong input, widget_in() could call atof() with an indeterminate pointer argument, typically leading to a crash; or if it didn't do that, it might return a NULL pointer, which again would lead to a crash since old-style C functions aren't supposed to do things that way. Fix that by correcting the off-by-one syntax test and throwing a proper error rather than just returning NULL. Also, since widget_in and widget_out have been marked STRICT for a long time, their tests for null inputs are just dead code; remove 'em. In the oldest branches, also improve widget_out to use snprintf not sprintf, just to be sure. In passing, get rid of a long-since-useless sprintf into a local buffer that nothing further is done with, and make some other minor coding style cleanups. In the intended regression-testing usage of these functions, none of this is very significant; but if the regression test database were left around in a production installation, these bugs could amount to a minor security hazard. Piotr Stefaniak, Michael Paquier, and Tom Lane 09 January 2016, 18:44:27 UTC
e8808f3 Add STRICT to some C functions created by the regression tests. These functions readily crash when passed a NULL input value. The tests themselves do not pass NULL values to them; but when the regression database is used as a basis for fuzz testing, they cause a lot of noise. Also, if someone were to leave a regression database lying about in a production installation, these would create a minor security hazard. Andreas Seltenreich 09 January 2016, 18:03:27 UTC
b05a347 Fix unobvious interaction between -X switch and subdirectory creation. Turns out the only reason initdb -X worked is that pg_mkdir_p won't whine if you point it at something that's a symlink to a directory. Otherwise, the attempt to create pg_xlog/ just like all the other subdirectories would have failed. Let's be a little more explicit about what's happening. Oversight in my patch for bug #13853 (mea culpa for not testing -X ...) 07 January 2016, 23:20:58 UTC
099541e Use plain mkdir() not pg_mkdir_p() to create subdirectories of PGDATA. When we're creating subdirectories of PGDATA during initdb, we know darn well that the parent directory exists (or should exist) and that the new subdirectory doesn't (or shouldn't). There is therefore no need to use anything more complicated than mkdir(). Using pg_mkdir_p() just opens us up to unexpected failure modes, such as the one exhibited in bug #13853 from Nuri Boardman. It's not very clear why pg_mkdir_p() went wrong there, but it is clear that we didn't need to be trying to create parent directories in the first place. We're not even saving any code, as proven by the fact that this patch nets out at minus five lines. Since this is a response to a field bug report, back-patch to all branches. 07 January 2016, 20:22:01 UTC
b96f6f4 Windows: Make pg_ctl reliably detect service status pg_ctl is using isatty() to verify whether the process is running in a terminal, and if not it sends its output to Windows' Event Log ... which does the wrong thing when the output has been redirected to a pipe, as reported in bug #13592. To fix, make pg_ctl use the code we already have to detect service-ness: in the master branch, move src/backend/port/win32/security.c to src/port (with suitable tweaks so that it runs properly in backend and frontend environments); pg_ctl already has access to pgport so it Just Works. In older branches, that's likely to cause trouble, so instead duplicate the required code in pg_ctl.c. Author: Michael Paquier Bug report and diagnosis: Egon Kocjan Backpatch: all supported branches 07 January 2016, 14:59:08 UTC
d05103b Fix treatment of *lpNumberOfBytesRecvd == 0: that's a completion condition. pgwin32_recv() has treated a non-error return of zero bytes from WSARecv() as being a reason to block ever since the current implementation was introduced in commit a4c40f140d23cefb. However, so far as one can tell from Microsoft's documentation, that is just wrong: what it means is graceful connection closure (in stream protocols) or receipt of a zero-length message (in message protocols), and neither case should result in blocking here. The only reason the code worked at all was that control then fell into the retry loop, which did *not* treat zero bytes specially, so we'd get out after only wasting some cycles. But as of 9.5 we do not normally reach the retry loop and so the bug is exposed, as reported by Shay Rojansky and diagnosed by Andres Freund. Remove the unnecessary test on the byte count, and rearrange the code in the retry loop so that it looks identical to the initial sequence. Back-patch of commit 90e61df8130dc7051a108ada1219fb0680cb3eb6. The original plan was to apply this only to 9.5 and up, but after discussion and buildfarm testing, it seems better to back-patch. The noblock code path has been at risk of this problem since it was introduced (in 9.0); if it did happen in pre-9.5 branches, the symptom would be that a walsender would wait indefinitely rather than noticing a loss of connection. While we lack proof that the case has been seen in the field, it seems possible that it's happened without being reported. 04 January 2016, 22:41:33 UTC
e4959fb Teach pg_dump to quote reloption values safely. Commit c7e27becd2e6eb93 fixed this on the backend side, but we neglected the fact that several code paths in pg_dump were printing reloptions values that had not gotten massaged by ruleutils. Apply essentially the same quoting logic in those places, too. 03 January 2016, 00:04:45 UTC
aa078a9 Adjust back-branch release note description of commits a2a718b22 et al. As pointed out by Michael Paquier, recovery_min_apply_delay didn't exist in 9.0-9.3, making the release note text not very useful. Instead make it talk about recovery_target_xid, which did exist then. 9.0 is already out of support, but we can fix the text in the newer branches' copies of its release notes. 02 January 2016, 20:29:03 UTC
2da136d Update copyright for 2016 Backpatch certain files through 9.1 02 January 2016, 18:33:39 UTC
85dbc46 Teach flatten_reloptions() to quote option values safely. flatten_reloptions() supposed that it didn't really need to do anything beyond inserting commas between reloption array elements. However, in principle the value of a reloption could be nearly anything, since the grammar allows a quoted string there. Any restrictions on it would come from validity checking appropriate to the particular option, if any. A reloption value that isn't a simple identifier or number could thus lead to dump/reload failures due to syntax errors in CREATE statements issued by pg_dump. We've gotten away with not worrying about this so far with the core-supported reloptions, but extensions might allow reloption values that cause trouble, as in bug #13840 from Kouhei Sutou. To fix, split the reloption array elements explicitly, and then convert any value that doesn't look like a safe identifier to a string literal. (The details of the quoting rule could be debated, but this way is safe and requires little code.) While we're at it, also quote reloption names if they're not safe identifiers; that may not be a likely problem in the field, but we might as well try to be bulletproof here. It's been like this for a long time, so back-patch to all supported branches. Kouhei Sutou, adjusted some by me 01 January 2016, 20:27:53 UTC
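A simplified sketch of the quoting idea (not the actual flatten_reloptions() code; the safe-character test here is deliberately crude): values that do not look like plain identifiers or numbers get wrapped in a single-quoted literal with embedded quotes doubled.

    /* Simplified sketch of quoting a reloption value; not the backend code. */
    #include <ctype.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool
    value_is_safe(const char *s)
    {
        if (*s == '\0')
            return false;
        for (const char *p = s; *p; p++)
        {
            if (!isalnum((unsigned char) *p) && *p != '_' && *p != '.' && *p != '-')
                return false;
        }
        return true;
    }

    static void
    emit_reloption(const char *name, const char *value)
    {
        if (value_is_safe(value))
            printf("%s=%s\n", name, value);
        else
        {
            printf("%s='", name);
            for (const char *p = value; *p; p++)
            {
                if (*p == '\'')
                    putchar('\'');          /* double embedded single quotes */
                putchar(*p);
            }
            printf("'\n");
        }
    }

    int
    main(void)
    {
        emit_reloption("fillfactor", "70");             /* fillfactor=70 */
        emit_reloption("some.option", "hello world");   /* some.option='hello world' */
        return 0;
    }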
60f8cc9 Add some more defenses against silly estimates to gincostestimate(). A report from Andy Colson showed that gincostestimate() was not being nearly paranoid enough about whether to believe the statistics it finds in the index metapage. The problem is that the metapage stats (other than the pending-pages count) are only updated by VACUUM, and in the worst case could still reflect the index's original empty state even when it has grown to many entries. We attempted to deal with that by scaling up the stats to match the current index size, but if nEntries is zero then scaling it up still gives zero. Moreover, the proportion of pages that are entry pages vs. data pages vs. pending pages is unlikely to be estimated very well by scaling if the index is now orders of magnitude larger than before. We can improve matters by expanding the use of the rule-of-thumb estimates I introduced in commit 7fb008c5ee59b040: if the index has grown by more than a cutoff amount (here set at 4X growth) since VACUUM, then use the rule-of-thumb numbers instead of scaling. This might not be exactly right but it seems much less likely to produce insane estimates. I also improved both the scaling estimate and the rule-of-thumb estimate to account for numPendingPages, since it's reasonable to expect that that is accurate in any case, and certainly pages that are in the pending list are not either entry or data pages. As a somewhat separate issue, adjust the estimation equations that are concerned with extra fetches for partial-match searches. These equations suppose that a fraction partialEntries / numEntries of the entry and data pages will be visited as a consequence of a partial-match search. Now, it's physically impossible for that fraction to exceed one, but our estimate of partialEntries is mostly bunk, and our estimate of numEntries isn't exactly gospel either, so we could arrive at a silly value. In the example presented by Andy we were coming out with a value of 100, leading to insane cost estimates. Clamp the fraction to one to avoid that. Like the previous patch, back-patch to all supported branches; this problem can be demonstrated in one form or another in all of them. 01 January 2016, 18:42:48 UTC
4388895 Document the exponentiation operator as associating left to right. Common mathematical convention is that exponentiation associates right to left. We aren't going to change the parser for this, but we could note it in the operator's description. (It's already noted in the operator precedence/associativity table, but users might not look there.) Per bug #13829 from Henrik Pauli. 28 December 2015, 17:09:40 UTC
1b6102e Add forgotten CHECK_FOR_INTERRUPT calls in pgcrypto's crypt() Both Blowfish and DES implementations of crypt() can take arbitrarily long time, depending on the number of rounds specified by the caller; make sure they can be interrupted. Author: Andreas Karlsson Reviewer: Jeff Janes Backpatch to 9.1. 27 December 2015, 16:03:19 UTC
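The general pattern of the fix, as a minimal hedged sketch (the loop body and function name are placeholders, not the pgcrypto code): backend loops whose length is controlled by the caller should poll CHECK_FOR_INTERRUPTS() so statement cancel and backend termination keep working.

    /* Minimal sketch of the interrupt-polling pattern; not the pgcrypto code. */
    #include "postgres.h"
    #include "miscadmin.h"          /* CHECK_FOR_INTERRUPTS() */

    static void
    run_hash_rounds(long nrounds)
    {
        for (long i = 0; i < nrounds; i++)
        {
            CHECK_FOR_INTERRUPTS();     /* honor statement cancel / terminate */

            /* ... one expensive hashing round would go here ... */
        }
    }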
7e29e7f Rework internals of changing a type's ownership. This is necessary so that REASSIGN OWNED does the right thing with composite types, to wit, that it also alters ownership of the type's pg_class entry -- previously, the pg_class entry remained owned by the original user, which later caused other failures such as the new owner's inability to use ALTER TYPE to rename an attribute of the affected composite. Also, if the original owner is later dropped, the pg_class entry becomes owned by a non-existent user, which is bogus. To fix, create a new routine AlterTypeOwner_oid which knows whether to pass the request to ATExecChangeOwner or deal with it directly, and use that in shdepReassignOwner rather than calling AlterTypeOwnerInternal directly. AlterTypeOwnerInternal is now simpler in that it only modifies the pg_type entry and recurses to handle a possible array type; higher-level tasks are handled by either AlterTypeOwner directly or AlterTypeOwner_oid. I took the opportunity to add a few more objects to the test rig for REASSIGN OWNED, so that more cases are exercised. Additional ones could be added for superuser-only-ownable objects (such as FDWs and event triggers) but I didn't want to push my luck by adding a new superuser to the tests on a backpatchable bug fix. Per bug #13666 reported by Chris Pacejo. This is a backpatch of commit 756e7b4c9db1 to branches 9.1 -- 9.4. 21 December 2015, 22:49:15 UTC
ab14c13 adjust ACL owners for REASSIGN and ALTER OWNER TO When REASSIGN and ALTER OWNER TO are used, both the object owner and ACL list should be changed from the old owner to the new owner. This patch fixes types, foreign data wrappers, and foreign servers to change their ACL list properly; they already changed owners properly. Report by Alexey Bashtanov This is a backpatch of commit 59367fdf97c (for bug #9923) by Bruce Momjian to branches 9.1 - 9.4; it wasn't backpatched originally out of concerns that it would create a backwards compatibility problem, but per discussion related to bug #13666 that turns out to have been misguided. (Therefore, the entry in the 9.5 release notes should be removed.) Note that 9.1 didn't have privileges on types (which were introduced by commit 729205571e81), so this commit only changes foreign-data related objects in that branch. Discussion: http://www.postgresql.org/message-id/20151216224004.GL2618@alvherre.pgsql http://www.postgresql.org/message-id/10227.1450373793@sss.pgh.pa.us 21 December 2015, 22:16:15 UTC
6270ec1 Remove silly completion for "DELETE FROM tabname ...". psql offered USING, WHERE, and SET in this context, but SET is not a valid possibility here. Seems to have been a thinko in commit f5ab0a14ea83eb6c which added DELETE's USING option. 20 December 2015, 23:29:52 UTC
db462a4 Fix improper initialization order for readline. Turns out we must set rl_basic_word_break_characters *before* we call rl_initialize() the first time, because it will quietly copy that value elsewhere --- but only on the first call. (Love these undocumented dependencies.) I broke this yesterday in commit 2ec477dc8108339d; like that commit, back-patch to all active branches. Per report from Pavel Stehule. 17 December 2015, 21:55:51 UTC
03b138e Cope with Readline's failure to track SIGWINCH events outside of input. It emerges that libreadline doesn't notice terminal window size change events unless they occur while collecting input. This is easy to stumble over if you resize the window while using a pager to look at query output, but it can be demonstrated without any pager involvement. The symptom is that queries exceeding one line are misdisplayed during subsequent input cycles, because libreadline has the wrong idea of the screen dimensions. The safest, simplest way to fix this is to call rl_reset_screen_size() just before calling readline(). That causes an extra ioctl(TIOCGWINSZ) for every command; but since it only happens when reading from a tty, the performance impact should be negligible. A more valid objection is that this still leaves a tiny window during entry to readline() wherein delivery of SIGWINCH will be missed; but the practical consequences of that are probably negligible. In any case, there doesn't seem to be any good way to avoid the race, since readline exposes no functions that seem safe to call from a generic signal handler --- rl_reset_screen_size() certainly isn't. It turns out that we also need an explicit rl_initialize() call, else rl_reset_screen_size() dumps core when called before the first readline() call. rl_reset_screen_size() is not present in old versions of libreadline, so we need a configure test for that. (rl_initialize() is present at least back to readline 4.0, so we won't bother with a test for it.) We would need a configure test anyway since libedit's emulation of libreadline doesn't currently include such a function. Fortunately, libedit seems not to have any corresponding bug. Merlin Moncure, adjusted a bit by me 16 December 2015, 21:58:56 UTC
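As a minimal standalone illustration of the workaround (not psql itself; the HAVE_RL_RESET_SCREEN_SIZE guard stands in for the configure test mentioned above): initialize readline once, then refresh its notion of the screen size right before each prompt so a resize that happened outside input collection is picked up.

    /* Minimal standalone sketch of the readline workaround; not psql code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <readline/readline.h>

    int
    main(void)
    {
        rl_initialize();                    /* must precede rl_reset_screen_size() */

        for (;;)
        {
    #ifdef HAVE_RL_RESET_SCREEN_SIZE        /* absent in old readline and in libedit */
            rl_reset_screen_size();         /* pick up window resizes we missed */
    #endif
            char   *line = readline("demo> ");

            if (line == NULL)
                break;
            free(line);
        }
        return 0;
    }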
c54bc78 Add missing CHECK_FOR_INTERRUPTS in lseg_inside_poly. Apparently, there are bugs in this code that cause it to loop endlessly. Those bugs still need more research, but in the meantime it's clear that the loop is missing a check for interrupts so that it can be cancelled in a timely fashion. Backpatch to 9.1 -- this has been missing since 49475aab8d0d. 14 December 2015, 19:44:40 UTC
4b58ded Fix out-of-memory error handling in ParameterDescription message processing. If libpq ran out of memory while constructing the result set, it would hang, waiting for more data from the server, which might never arrive. To fix, distinguish between out-of-memory error and not-enough-data cases, and give a proper error message back to the client on OOM. There are still similar issues in handling COPY start messages, but let's handle that as a separate patch. Michael Paquier, Amit Kapila and me. Backpatch to all supported versions. 14 December 2015, 16:48:49 UTC
476c54b Correct statement to actually be the intended assert statement. e3f4cfc7 introduced a LWLockHeldByMe() call, without the corresponding Assert() surrounding it. Spotted by Coverity. Backpatch: 9.1+, like the previous commit 14 December 2015, 10:24:53 UTC
20f85bc Docs: document that psql's "\i -" means read from stdin. This has worked that way for a long time, maybe always, but you would not have known it from the documentation. Also back-patch the notes I added to HEAD earlier today about behavior of the "-f -" switch, which likewise have been valid for many releases. 14 December 2015, 04:42:54 UTC
f2ce8f2 Doc: update external URLs for PostGIS project. Paul Ramsey 13 December 2015, 01:02:30 UTC
5f9a86b Fix ALTER TABLE ... SET TABLESPACE for unlogged relations. Changing the tablespace of an unlogged relation did not WAL log the creation and content of the init fork. Thus, after a standby is promoted, unlogged relations cannot be accessed anymore, with errors like: ERROR: 58P01: could not open file "pg_tblspc/...": No such file or directory Additionally, the init fork was not synced to disk, independent of the configured wal_level, a relatively small durability risk. Investigation of that problem also brought to light that, even for permanent relations, the creation of !main forks was not WAL logged, i.e. no XLOG_SMGR_CREATE records were emitted. That mostly turns out not to be a problem, because these files were created when the actual relation data is copied; nonexistent files are not treated as an error condition during replay. But that doesn't work for empty files, and generally feels a bit haphazard. Luckily, outside init and main forks, empty forks don't occur often or are not a problem. Add the required WAL logging and syncing to disk. Reported-By: Michael Paquier Author: Michael Paquier and Andres Freund Discussion: 20151210163230.GA11331@alap3.anarazel.de Backpatch: 9.1, where unlogged relations were introduced 12 December 2015, 13:19:29 UTC
386dcd5 Add an expected-file to match behavior of latest libxml2. Recent releases of libxml2 do not provide error context reports for errors detected at the very end of the input string. This appears to be a bug, or at least an infelicity, introduced by the fix for libxml2's CVE-2015-7499. We can hope that this behavioral change will get undone before too long; but the security patch is likely to spread a lot faster/further than any follow-on cleanup, which means this behavior is likely to be present in the wild for some time to come. As a stopgap, add a variant regression test expected-file that matches what you get with a libxml2 that acts this way. 12 December 2015, 00:08:40 UTC
f44c520 Fix REASSIGN OWNED for foreign user mappings. As reported in bug #13809 by Alexander Ashurkov, the code for REASSIGN OWNED hadn't gotten word about user mappings. Deal with them in the same way default ACLs do, which is to ignore them altogether; they are handled just fine by DROP OWNED. The other foreign object cases are already handled correctly by both commands. Also add a REASSIGN OWNED statement to the foreign_data test to exercise the foreign data objects. (The changes are just before the "cleanup" phase, so it shouldn't remove any existing live test.) Reported by Alexander Ashurkov, then independently by Jaime Casanova. 11 December 2015, 21:39:09 UTC
2a37a10 Install our "missing" script where PGXS builds can find it. This allows sane behavior in a PGXS build done on a machine where build tools such as bison are missing. Jim Nasby 11 December 2015, 21:14:48 UTC
3199c13 Fix bug leading to restoring unlogged relations from empty files. At the end of crash recovery, unlogged relations are reset to the empty state, using their init fork as the template. The init fork is copied to the main fork without going through shared buffers. Unfortunately WAL replay so far has not necessarily flushed writes from shared buffers to disk at that point. In normal crash recovery, and before the introduction of 'fast promotions' in fd4ced523 / 9.3, the END_OF_RECOVERY checkpoint flushes the buffers out in time. But with fast promotions that's not the case anymore. To fix, force WAL writes targeting the init fork to be flushed immediately (using the new FlushOneBuffer() function). In 9.5+ that flush can centrally be triggered from the code dealing with restoring full page writes (XLogReadBufferForRedoExtended), in earlier releases that responsibility is in the hands of XLOG_HEAP_NEWPAGE's replay function. Backpatch to 9.1, even if this currently is only known to trigger in 9.3+. Flushing earlier is more robust, and it is advantageous to keep the branches similar. Typical symptoms of this bug are errors like 'ERROR: index "..." contains unexpected zero page at block 0' shortly after promoting a node. Reported-By: Thom Brown Author: Andres Freund and Michael Paquier Discussion: 20150326175024.GJ451@alap3.anarazel.de Backpatch: 9.1- 10 December 2015, 15:29:27 UTC
f9fc8e7 Further improve documentation of the role-dropping process. In commit 1ea0c73c2 I added a section to user-manag.sgml about how to drop roles that own objects; but as pointed out by Stephen Frost, I neglected that shared objects (databases or tablespaces) may need special treatment. Fix that. Back-patch to supported versions, like the previous patch. 04 December 2015, 19:44:39 UTC
7882143 Make gincostestimate() cope with hypothetical GIN indexes. We tried to fetch statistics data from the index metapage, which does not work if the index isn't actually present. If the index is hypothetical, instead extrapolate some plausible internal statistics based on the index page count provided by the index-advisor plugin. There was already some code in gincostestimate() to invent internal stats in this way, but since it was only meant as a stopgap for pre-9.1 GIN indexes that hadn't been vacuumed since upgrading, it was pretty crude. If we want it to support index advisors, we should try a little harder. A small amount of testing says that it's better to estimate the entry pages as 90% of the index, not 100%. Also, estimating the number of entries (keys) as equal to the heap tuple count could be wildly wrong in either direction. Instead, let's estimate 100 entries per entry page. Perhaps someday somebody will want the index advisor to be able to provide these numbers more directly, but for the moment this should serve. Problem report and initial patch by Julien Rouhaud; modified by me to invent less-bogus internal statistics. Back-patch to all supported branches, since we've supported index advisors since 9.0. 01 December 2015, 21:24:35 UTC
8438749 Use "g" not "f" format in ecpg's PGTYPESnumeric_from_double(). The previous coding could overrun the provided buffer size for a very large input, or lose precision for a very small input. Adopt the methodology that's been in use in the equivalent backend code for a long time. Per private report from Bas van Schaik. Back-patch to all supported branches. 01 December 2015, 16:42:52 UTC
cb7ea8d Fix failure to consider failure cases in GetComboCommandId(). Failure to initially palloc the comboCids array, or to realloc it bigger when needed, left combocid's data structures in an inconsistent state that would cause trouble if the top transaction continues to execute. Noted while examining a user complaint about the amount of memory used for this. (There's not much we can do about that, but it does point up that repalloc failure has a non-negligible chance of occurring here.) In HEAD/9.5, also avoid possible invocation of memcpy() with a null pointer in SerializeComboCIDState; cf commit 13bba0227. 26 November 2015, 18:23:03 UTC
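The invariant at stake can be shown with a generic, hedged C sketch (plain realloc() here, not the backend's combocid code): bookkeeping such as the allocated size must be updated only after the allocation has succeeded, so that a failure leaves the structure in a consistent state.

    /* Generic sketch of keeping array bookkeeping consistent across a failed
     * (re)allocation; not the GetComboCommandId() code. */
    #include <stdlib.h>

    typedef struct demo_array
    {
        int    *items;
        int     used;
        int     allocated;
    } demo_array;

    static int
    demo_append(demo_array *a, int value)
    {
        if (a->used >= a->allocated)
        {
            int     newsize = (a->allocated > 0) ? a->allocated * 2 : 8;
            int    *tmp = realloc(a->items, newsize * sizeof(int));

            if (tmp == NULL)
                return -1;              /* caller still sees the old, valid state */
            a->items = tmp;
            a->allocated = newsize;     /* update bookkeeping only after success */
        }
        a->items[a->used++] = value;
        return 0;
    }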
6430a11 Be more paranoid about null return values from libpq status functions. PQhost() can return NULL in non-error situations, namely when a Unix-socket connection has been selected by default. That behavior is a tad debatable perhaps, but for the moment we should make sure that psql copes with it. Unfortunately, do_connect() failed to: it could pass a NULL pointer to strcmp(), resulting in crashes on most platforms. This was reported as a security issue by ChenQin of Topsec Security Team, but the consensus of the security list is that it's just a garden-variety bug with no security implications. For paranoia's sake, I made the keep_password test not trust PQuser or PQport either, even though I believe those will never return NULL given a valid PGconn. Back-patch to all supported branches. 25 November 2015, 22:31:54 UTC
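A minimal libpq sketch of the defensive pattern (connection parameters and comparisons are placeholders, not psql's do_connect()): treat a NULL return from PQhost() and friends as an empty string before handing it to strcmp().

    /* Minimal libpq sketch of null-safe use of connection status functions. */
    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("");     /* default, possibly Unix-socket, connection */
        const char *host = PQhost(conn);        /* may legitimately be NULL */
        const char *user = PQuser(conn);

        if (strcmp(host ? host : "", "localhost") == 0)
            printf("connected to localhost as %s\n", user ? user : "(unknown)");
        else
            printf("host: %s\n", host ? host : "(default socket)");

        PQfinish(conn);
        return 0;
    }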
c36064e pg_upgrade: fix CopyFile() on Windows to fail on file existence Also fix getErrorText() to return the right error string on failure. This behavior now matches that of other operating systems. Report by Noah Misch Backpatch through 9.1 24 November 2015, 22:18:27 UTC
6df62ef Fix Windows builds in back branches. I missed adding src/port/tar.c to the Windows build files when back-patching the addition of that file to 9.2 and 9.1. Per buildfarm. 23 November 2015, 05:32:01 UTC
8f1559a Adopt the GNU convention for handling tar-archive members exceeding 8GB. The POSIX standard for tar headers requires archive member sizes to be printed in octal with at most 11 digits, limiting the representable file size to 8GB. However, GNU tar and apparently most other modern tars support a convention in which oversized values can be stored in base-256, allowing any practical file to be a tar member. Adopt this convention to remove two limitations: * pg_dump with -Ft output format failed if the contents of any one table exceeded 8GB. * pg_basebackup failed if the data directory contained any file exceeding 8GB. (This would be a fatal problem for installations configured with a table segment size of 8GB or more, and it has also been seen to fail when large core dump files exist in the data directory.) File sizes under 8GB are still printed in octal, so that no compatibility issues are created except in cases that would have failed entirely before. In addition, this patch fixes several bugs in the same area: * In 9.3 and later, we'd defined tarCreateHeader's file-size argument as size_t, which meant that on 32-bit machines it would write a corrupt tar header for file sizes between 4GB and 8GB, even though no error was raised. This broke both "pg_dump -Ft" and pg_basebackup for such cases. * pg_restore from a tar archive would fail on tables of size between 4GB and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits. This happened even with an archive file not affected by the previous bug. * pg_basebackup would fail if there were files of size between 4GB and 8GB, even on 64-bit machines. * In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size, on 64-bit big-endian machines. In view of these potential data-loss bugs, back-patch to all supported branches, even though removal of the documented 8GB limit might otherwise be considered a new feature rather than a bug fix. 22 November 2015, 01:21:32 UTC
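A standalone sketch of the size-field convention described above (simplified; the helper name is invented and this is not the PostgreSQL tar-writing code): sizes that fit in 11 octal digits keep the POSIX format, while larger ones are stored big-endian base-256 with the high bit of the first byte set.

    /* Standalone sketch of the GNU base-256 size encoding for the 12-byte
     * tar "size" field; simplified for illustration. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void
    put_tar_size(char field[12], uint64_t size)
    {
        if (size <= 077777777777ULL)            /* 8GB - 1: fits in 11 octal digits */
            snprintf(field, 12, "%011llo", (unsigned long long) size);
        else
        {
            memset(field, 0, 12);
            field[0] = (char) 0x80;             /* flag byte: base-256 encoding follows */
            for (int i = 11; i > 0; i--)
            {
                field[i] = (char) (size & 0xFF);
                size >>= 8;
            }
        }
    }

    int
    main(void)
    {
        char    buf[12];

        put_tar_size(buf, 10ULL * 1024 * 1024 * 1024);  /* a 10GB member */
        return 0;
    }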
60ba32c Fix handling of inherited check constraints in ALTER COLUMN TYPE (again). The previous way of reconstructing check constraints was to do a separate "ALTER TABLE ONLY tab ADD CONSTRAINT" for each table in an inheritance hierarchy. However, that way has no hope of reconstructing the check constraints' own inheritance properties correctly, as pointed out in bug #13779 from Jan Dirk Zijlstra. What we should do instead is to do a regular "ALTER TABLE", allowing recursion, at the topmost table that has a particular constraint, and then suppress the work queue entries for inherited instances of the constraint. Annoyingly, we'd tried to fix this behavior before, in commit 5ed6546cf, but we failed to notice that it wasn't reconstructing the pg_constraint field values correctly. As long as I'm touching pg_get_constraintdef_worker anyway, tweak it to always schema-qualify the target table name; this seems like useful backup to the protections installed by commit 5f173040. In HEAD/9.5, get rid of get_constraint_relation_oids, which is now unused. (I could alternatively have modified it to also return conislocal, but that seemed like a pretty single-purpose API, so let's not pretend it has some other use.) It's unused in the back branches as well, but I left it in place just in case some third-party code has decided to use it. In HEAD/9.5, also rename pg_get_constraintdef_string to pg_get_constraintdef_command, as the previous name did nothing to explain what that entry point did differently from others (and its comment was equally useless). Again, that change doesn't seem like material for back-patching. I did a bit of re-pgindenting in tablecmds.c in HEAD/9.5, as well. Otherwise, back-patch to all supported branches. 20 November 2015, 19:55:29 UTC
b4afc39 Accept flex > 2.5.x in configure. Per buildfarm member anchovy, 2.6.0 exists in the wild now. Hopefully it works with Postgres; if not, we'll have to do something about that, but in any case claiming it's "too old" is pretty silly. 18 November 2015, 22:45:06 UTC
728a2ac Fix possible internal overflow in numeric division. div_var_fast() postpones propagating carries in the same way as mul_var(), so it has the same corner-case overflow risk we fixed in 246693e5ae8a36f0, namely that the size of the carries has to be accounted for when setting the threshold for executing a carry propagation step. We've not devised a test case illustrating the brokenness, but the required fix seems clear enough. Like the previous fix, back-patch to all active branches. Dean Rasheed 17 November 2015, 20:47:12 UTC
7b21d1b Fix ruleutils.c's dumping of whole-row Vars in ROW() and VALUES() contexts. Normally ruleutils prints a whole-row Var as "foo.*". We already knew that that doesn't work at top level of a SELECT list, because the parser would treat the "*" as a directive to expand the reference into separate columns, not a whole-row Var. However, Joshua Yanovski points out in bug #13776 that the same thing happens at top level of a ROW() construct; and some nosing around in the parser shows that the same is true in VALUES(). Hence, apply the same workaround already devised for the SELECT-list case, namely to add a forced cast to the appropriate rowtype in these cases. (The alternative of just printing "foo" was rejected because it is difficult to avoid ambiguity against plain columns named "foo".) Back-patch to all supported branches. 15 November 2015, 19:41:09 UTC
bdcbc2b pg_upgrade: properly detect file copy failure on Windows Previously, file copy failures were ignored on Windows due to an incorrect return value check. Report by Manu Joye Backpatch through 9.1 14 November 2015, 16:47:11 UTC
7fe1d1c Improve our workaround for 'TeX capacity exceeded' in building PDF files. In commit a5ec86a7c787832d28d5e50400ec96a5190f2555 I wrote a quick hack that reduced the number of TeX string pool entries created while converting our documentation to PDF form. That held the fort for awhile, but as of HEAD we're back up against the same limitation. It turns out that the original coding of \FlowObjectSetup actually results in *three* string pool entries being generated for every "flow object" (that is, potential cross-reference target) in the documentation, and my previous hack only got rid of one of them. With a little more care, we can reduce the string count to one per flow object plus one per actually-cross-referenced flow object (about 115000 + 5000 as of current HEAD); that should work until the documentation volume roughly doubles from where it is today. As a not-incidental side benefit, this change also causes pdfjadetex to stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file. It had been making one willy-nilly for every flow object; now it's just one per actually-cross-referenced object. This results in close to a 2X savings in PDF file size. We will still want to run the output through "jpdftweak" to get it to be compressed; but we no longer need removal of unreferenced bookmarks, so we might be able to find a quicker tool for that step. Although the failure only affects HEAD and US-format output at the moment, 9.5 cannot be more than a few pages short of failing likewise, so it will inevitably fail after a few rounds of minor-version release notes. I don't have a lot of faith that we'll never hit the limit in the older branches; and anyway it would be nice to get rid of jpdftweak across the board. Therefore, back-patch to all supported branches. 10 November 2015, 21:00:35 UTC
87deb55 Don't connect() to a wildcard address in test_postmaster_connection(). At least OpenBSD, NetBSD, and Windows don't support it. This repairs pg_ctl for listen_addresses='0.0.0.0' and listen_addresses='::'. Since pg_ctl prefers to test a Unix-domain socket, Windows users are most likely to need this change. Back-patch to 9.1 (all supported versions). This could change pg_ctl interaction with loopback-interface firewall rules. Therefore, in 9.4 and earlier (released branches), activate the change only on known-affected platforms. Reported (bug #13611) and designed by Kondo Yuta. 08 November 2015, 22:31:24 UTC
03ee659 Fix enforcement of restrictions inside regexp lookaround constraints. Lookahead and lookbehind constraints aren't allowed to contain backrefs, and parentheses within them are always considered non-capturing. Or so says the manual. But the regexp parser forgot about these rules once inside a parenthesized subexpression, so that constructs like (\w)(?=(\1)) were accepted (but then not correctly executed --- a case like this acted like (\w)(?=\w), without any enforcement that the two \w's match the same text). And in (?=((foo))) the innermost parentheses would be counted as capturing parentheses, though no text would ever be captured for them. To fix, properly pass down the "type" argument to the recursive invocation of parse(). Back-patch to all supported branches; it was agreed that silent misexecution of such patterns is worse than throwing an error, even though new errors in minor releases are generally not desirable. 07 November 2015, 17:43:24 UTC
08322da Fix serialization anomalies due to race conditions on INSERT. On insert the CheckForSerializableConflictIn() test was performed before the page(s) which were going to be modified had been locked (with an exclusive buffer content lock). If another process acquired a relation SIReadLock on the heap and scanned to a page on which an insert was going to occur before the page was so locked, a rw-conflict would be missed, which could allow a serialization anomaly to be missed. The window between the check and the page lock was small, so the bug was generally not noticed unless there was high concurrency with multiple processes inserting into the same table. This was reported by Peter Bailis as bug #11732, by Sean Chittenden as bug #13667, and by others. The race condition was eliminated in heap_insert() by moving the check down below the acquisition of the buffer lock, which had been the very next statement. Because of the loop locking and unlocking multiple buffers in heap_multi_insert() a check was added after all inserts were completed. The check before the start of the inserts was left because it might avoid a large amount of work to detect a serialization anomaly before performing the all of the inserts and the related WAL logging. While investigating this bug, other SSI bugs which were even harder to hit in practice were noticed and fixed, an unnecessary check (covered by another check, so redundant) was removed from heap_update(), and comments were improved. Back-patch to all supported branches. Kevin Grittner and Thomas Munro 31 October 2015, 19:36:58 UTC
b97a41a Fix back-patch of commit 8e3b4d9d40244c037bbc6e182ea3fabb9347d482. master emits an extra context message compared to 9.5 and earlier. 20 October 2015, 04:58:47 UTC
91d62b1 Eschew "RESET statement_timeout" in tests. Instead, use transaction abort. Given an unlucky bout of latency, the timeout would cancel the RESET itself. Buildfarm members gharial, lapwing, mereswine, shearwater, and sungazer witness that. Back-patch to 9.1 (all supported versions). The query_canceled test still could timeout before entering its subtransaction; for whatever reason, that has yet to happen on the buildfarm. 20 October 2015, 04:37:55 UTC
0ce829c Fix incorrect handling of lookahead constraints in pg_regprefix(). pg_regprefix was doing nothing with lookahead constraints, which would be fine if it were the right kind of nothing, but it isn't: we have to terminate our search for a fixed prefix, not just pretend the LACON arc isn't there. Otherwise, if the current state has both a LACON outarc and a single plain-color outarc, we'd falsely conclude that the color represents an addition to the fixed prefix, and generate an extracted index condition that restricts the indexscan too much. (See added regression test case.) Terminating the search is conservative: we could traverse the LACON arc (thus assuming that the constraint can be satisfied at runtime) and then examine the outarcs of the linked-to state. But that would be a lot more work than it seems worth, because writing a LACON followed by a single plain character is a pretty silly thing to do. This makes a difference only in rather contrived cases, but it's a bug, so back-patch to all supported branches. 19 October 2015, 20:54:54 UTC
a9bcd83 Fix order of arguments in ecpg generated typedef command. 18 October 2015, 08:17:12 UTC
4083a52 Miscellaneous cleanup of regular-expression compiler. Revert our previous addition of "all" flags to copyins() and copyouts(); they're no longer needed, and were never anything but an unsightly hack. Improve a couple of infelicities in the REG_DEBUG code for dumping the NFA data structure, including adding code to count the total number of states and arcs. Add a couple of missed error checks. Add some more documentation in the README file, and some regression tests illustrating cases that exceeded the state-count limit and/or took unreasonable amounts of time before this set of patches. Back-patch to all supported branches. 16 October 2015, 19:52:12 UTC
b94c2b6 Improve memory-usage accounting in regular-expression compiler. This code previously counted the number of NFA states it created, and complained if a limit was exceeded, so as to prevent bizarre regex patterns from consuming unreasonable time or memory. That's fine as far as it went, but the code paid no attention to how many arcs linked those states. Since regexes can be contrived that have O(N) states but will need O(N^2) arcs after fixempties() processing, it was still possible to blow out memory, and take a long time doing it too. To fix, modify the bookkeeping to count space used by both states and arcs. I did not bother with including the "color map" in the accounting; it can only grow to a few megabytes, which is not a lot in comparison to what we're allowing for states+arcs (about 150MB on 64-bit machines or half that on 32-bit machines). Looking at some of the larger real-world regexes captured in the Tcl regression test suite suggests that the most that is likely to be needed for regexes found in the wild is under 10MB, so I believe that the current limit has enough headroom to make it okay to keep it as a hard-wired limit. In connection with this, redefine REG_ETOOBIG as meaning "regular expression is too complex"; the previous wording of "nfa has too many states" was already somewhat inapropos because of the error code's use for stack depth overrun, and it was not very user-friendly either. Back-patch to all supported branches. 16 October 2015, 19:36:17 UTC
067f96f Improve performance of pullback/pushfwd in regular-expression compiler. The previous coding would create a new intermediate state every time it wanted to interchange the ordering of two constraint arcs. Certain regex features such as \Y can generate large numbers of parallel constraint arcs, and if we needed to reorder the results of that, we created unreasonable numbers of intermediate states. To improve matters, keep a list of already-created intermediate states associated with the state currently being considered by the outer loop; we can re-use such states to place all the new arcs leading to the same destination or source. I also took the trouble to redefine push() and pull() to have a less risky API: they no longer delete any state or arc that the caller might possibly have a pointer to, except for the specifically-passed constraint arc. This reduces the risk of re-introducing the same type of error seen in the failed patch for CVE-2007-4772. Back-patch to all supported branches. 16 October 2015, 19:11:49 UTC
5503e6e Improve performance of fixempties() pass in regular-expression compiler. The previous coding took something like O(N^4) time to fully process a chain of N EMPTY arcs. We can't really do much better than O(N^2) because we have to insert about that many arcs, but we can do lots better than what's there now. The win comes partly from using mergeins() to amortize de-duplication of arcs across multiple source states, and partly from exploiting knowledge of the ordering of arcs for each state to avoid looking at arcs we don't need to consider during the scan. We do have to be a bit careful of the possible reordering of arcs introduced by the sort-merge coding of the previous commit, but that's not hard to deal with. Back-patch to all supported branches. 16 October 2015, 18:58:11 UTC
b00c79b Fix O(N^2) performance problems in regular-expression compiler. Change the singly-linked in-arc and out-arc lists to be doubly-linked, so that arc deletion is constant time rather than having worst-case time proportional to the number of other arcs on the connected states. Modify the bulk arc transfer operations copyins(), copyouts(), moveins(), moveouts() so that they use a sort-and-merge algorithm whenever there's more than a small number of arcs to be copied or moved. The previous method is O(N^2) in the number of arcs involved, because it performs duplicate checking independently for each copied arc. The new method may change the ordering of existing arcs for the destination state, but nothing really cares about that. Provide another bulk arc copying method mergeins(), which is unused as of this commit but is needed for the next one. It basically is like copyins(), but the source arcs might not all come from the same state. Replace the O(N^2) bubble-sort algorithm used in carcsort() with a qsort() call. These changes greatly improve the performance of regex compilation for large or complex regexes, at the cost of extra space for arc storage during compilation. The original tradeoff was probably fine when it was made, but now we care more about speed and less about memory consumption. Back-patch to all supported branches. 16 October 2015, 18:43:18 UTC
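As a hedged sketch of the carcsort() replacement (the arc struct and sort keys here are simplified and invented, not the real regex internals): sorting the arcs with qsort() and a plain comparator replaces the former O(N^2) bubble sort.

    /* Simplified sketch of sorting arcs with qsort(); illustrative only. */
    #include <stdlib.h>

    typedef struct demo_arc
    {
        int     co;         /* color */
        int     to;         /* destination state number */
    } demo_arc;

    static int
    demo_arc_cmp(const void *a, const void *b)
    {
        const demo_arc *x = a;
        const demo_arc *y = b;

        if (x->co != y->co)
            return (x->co > y->co) - (x->co < y->co);
        return (x->to > y->to) - (x->to < y->to);
    }

    static void
    demo_sort_arcs(demo_arc *arcs, size_t n)
    {
        qsort(arcs, n, sizeof(demo_arc), demo_arc_cmp);
    }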
d394f12 Fix regular-expression compiler to handle loops of constraint arcs. It's possible to construct regular expressions that contain loops of constraint arcs (that is, ^ $ AHEAD BEHIND or LACON arcs). There's no use in fully traversing such a loop at execution, since you'd just end up in the same NFA state without having consumed any input. Worse, such a loop leads to infinite looping in the pullback/pushfwd stage of compilation, because we keep pushing or pulling the same constraints around the loop in a vain attempt to move them to the pre or post state. Such looping was previously recognized in CVE-2007-4772; but the fix only handled the case of trivial single-state loops (that is, a constraint arc leading back to its source state) ... and not only that, it was incorrect even for that case, because it broke the admittedly-not-very-clearly-stated API contract of the pull() and push() subroutines. The first two regression test cases added by this commit exhibit patterns that result in assertion failures because of that (though there seem to be no ill effects in non-assert builds). The other new test cases exhibit multi-state constraint loops; in an unpatched build they will run until the NFA state-count limit is exceeded. To fix, remove the code added for CVE-2007-4772, and instead create a general-purpose constraint-loop-breaking phase of regex compilation that executes before we do pullback/pushfwd. Since we never need to traverse a constraint loop fully, we can just break the loop at any chosen spot, if we add clone states that can replicate any sequence of arc transitions that would've traversed just part of the loop. Also add some commentary clarifying why we have to have all these machinations in the first place. This class of problems has been known for some time --- we had a report from Marc Mamin about two years ago, for example, and there are related complaints in the Tcl bug tracker. I had discussed a fix of this kind off-list with Henry Spencer, but didn't get around to doing something about it until the issue was rediscovered by Greg Stark recently. Back-patch to all supported branches. 16 October 2015, 18:14:41 UTC
b0d8583 On Windows, ensure shared memory handle gets closed if not being used. Postmaster child processes that aren't supposed to be attached to shared memory were not bothering to close the shared memory mapping handle they inherit from the postmaster process. That's mostly harmless, since the handle vanishes anyway when the child process exits -- but the syslogger process, if used, doesn't get killed and restarted during recovery from a backend crash. That meant that Windows doesn't see the shared memory mapping as becoming free, so it doesn't delete it and the postmaster is unable to create a new one, resulting in failure to recover from crashes whenever logging_collector is turned on. Per report from Dmitry Vasilyev. It's a bit astonishing that we'd not figured this out long ago, since it's been broken from the very beginnings of our native Windows support; probably some previously-unexplained trouble reports trace to this. A secondary problem is that on Cygwin (perhaps only in older versions?), exec() may not detach from the shared memory segment after all, in which case these child processes did remain attached to shared memory, posing the risk of an unexpected shared memory clobber if they went off the rails somehow. That may be a long-gone bug, but we can deal with it now if it's still live, by detaching within the infrastructure introduced here to deal with closing the handle. Back-patch to all supported branches. Tom Lane and Amit Kapila 13 October 2015, 15:21:33 UTC
c869a7d Fix "pg_ctl start -w" to test child process status directly. pg_ctl start with -w previously relied on a heuristic that the postmaster would surely always manage to create postmaster.pid within five seconds. Unfortunately, that fails much more often than we would like on some of the slower, more heavily loaded buildfarm members. We have known for quite some time that we could remove the need for that heuristic on Unix by using fork/exec instead of system() to launch the postmaster. This allows us to know the exact PID of the postmaster, which allows near-certain verification that the postmaster.pid file is the one we want and not a leftover, and it also lets us use waitpid() to detect reliably whether the child postmaster has exited or not. What was blocking this change was not wanting to rewrite the Windows version of start_postmaster() to avoid use of CMD.EXE. That's doable in theory but would require fooling about with stdout/stderr redirection, and getting the handling of quote-containing postmaster switches to stay the same might be rather ticklish. However, we realized that we don't have to do that to fix the problem, because we can test whether the shell process has exited as a proxy for whether the postmaster is still alive. That doesn't allow an exact check of the PID in postmaster.pid, but we're no worse off than before in that respect; and we do get to get rid of the heuristic about how long the postmaster might take to create postmaster.pid. On Unix, this change means that a second "pg_ctl start -w" immediately after another such command will now reliably fail, whereas previously it would succeed if done within two seconds of the earlier command. Since that's a saner behavior anyway, it's fine. On Windows, the case can still succeed within the same time window, since pg_ctl can't tell that the earlier postmaster's postmaster.pid isn't the pidfile it is looking for. To ensure stable test results on Windows, we can insert a short sleep into the test script for pg_ctl, ensuring that the existing pidfile looks stale. This hack can be removed if we ever do rewrite start_postmaster(), but that no longer seems like a high-priority thing to do. Back-patch to all supported versions, both because the current behavior is buggy and because we must do that if we want the buildfarm failures to go away. Tom Lane and Michael Paquier 12 October 2015, 22:30:37 UTC
ef5f811 Improve documentation of the role-dropping process. In general one may have to run both REASSIGN OWNED and DROP OWNED to get rid of all the dependencies of a role to be dropped. This was alluded to in the REASSIGN OWNED man page, but not really spelled out in full; and in any case the procedure ought to be documented in a more prominent place than that. Add a section to the "Database Roles" chapter explaining this, and do a bit of wordsmithing in the relevant commands' man pages. 07 October 2015, 20:12:06 UTC
dea6da1 Perform an immediate shutdown if the postmaster.pid file is removed. The postmaster now checks every minute or so (worst case, at most two minutes) that postmaster.pid is still there and still contains its own PID. If not, it performs an immediate shutdown, as though it had received SIGQUIT. The original goal behind this change was to ensure that failed buildfarm runs would get fully cleaned up, even if the test scripts had left a postmaster running, which is not an infrequent occurrence. When the buildfarm script removes a test postmaster's $PGDATA directory, its next check on postmaster.pid will fail and cause it to exit. Previously, manual intervention was often needed to get rid of such orphaned postmasters, since they'd block new test postmasters from obtaining the expected socket address. However, by checking postmaster.pid and not something else, we can provide additional robustness: manual removal of postmaster.pid is a frequent DBA mistake, and now we can at least limit the damage that will ensue if a new postmaster is started while the old one is still alive. Back-patch to all supported branches, since we won't get the desired improvement in buildfarm reliability otherwise. 06 October 2015, 21:15:27 UTC
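A small sketch of that self-check follows: read the pid file, compare the recorded PID with our own, and shut down if the file is gone or belongs to someone else. Here "postmaster.pid" is just a file in the current directory and the check runs once; the real postmaster performs the equivalent test periodically and reacts as if it had received SIGQUIT.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int
    pidfile_still_ours(const char *path)
    {
        FILE *fp = fopen(path, "r");
        long  filepid;

        if (fp == NULL)
            return 0;               /* file removed: no longer ours */
        if (fscanf(fp, "%ld", &filepid) != 1)
            filepid = -1;           /* unreadable content */
        fclose(fp);
        return filepid == (long) getpid();
    }

    int
    main(void)
    {
        const char *path = "postmaster.pid";
        FILE *fp = fopen(path, "w");

        if (fp == NULL)
        {
            perror("fopen");
            return 1;
        }
        fprintf(fp, "%ld\n", (long) getpid());
        fclose(fp);

        if (!pidfile_still_ours(path))
        {
            /* the real postmaster would perform an immediate shutdown here */
            fprintf(stderr, "pid file missing or taken over, shutting down\n");
            return 1;
        }
        printf("pid file still ours, continuing\n");
        return 0;
    }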
f0ceb25 Stamp 9.1.19. 05 October 2015, 19:17:54 UTC
2136934 doc: Update URLs of external projects 05 October 2015, 16:29:20 UTC
e01548b Translation updates Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 8e743278f47ca82f2af2c37eb8bb200bc8df2088 05 October 2015, 15:00:54 UTC
7bb63b2 Last-minute updates for release notes. Add entries for security and not-quite-security issues. Security: CVE-2015-5288, CVE-2015-5289 05 October 2015, 14:57:49 UTC
9383773 Remove outdated comment about relation level autovacuum freeze limits. The documentation for the autovacuum_multixact_freeze_max_age and autovacuum_freeze_max_age relation level parameters contained: "Note that while you can set autovacuum_multixact_freeze_max_age very small, or even zero, this is usually unwise since it will force frequent vacuuming." which hasn't been true since these options were made relation options, instead of residing in the pg_autovacuum table (834a6da4f7). Remove the outdated sentence. Even the lowered limits from 2596d70 are high enough that this doesn't warrant calling out the risk in the CREATE TABLE docs. Per discussion with Tom Lane and Alvaro Herrera Discussion: 26377.1443105453@sss.pgh.pa.us Backpatch: 9.0- (in parts) 05 October 2015, 14:51:04 UTC
879877b Prevent stack overflow in query-type functions. The tsquery, ltxtquery and query_int data types have a common ancestor. Having acquired check_stack_depth() calls independently, each was missing at least one call. Back-patch to 9.0 (all supported versions). 05 October 2015, 14:06:35 UTC
9581e26 Prevent stack overflow in container-type functions. A range type can name another range type as its subtype, and a record type can bear a column of another record type. Consequently, functions like range_cmp() and record_recv() are recursive. Functions at risk include operator family members and referents of pg_type regproc columns. Treat as recursive any such function that looks up and calls the same-purpose function for a record column type or the range subtype. Back-patch to 9.0 (all supported versions). An array type's element type is never itself an array type, so array functions are unaffected. Recursion depth proportional to array dimensionality, found in array_dim_to_jsonb(), is fine thanks to MAXDIM. 05 October 2015, 14:06:35 UTC
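The guard added by these stack-overflow fixes can be sketched as below: estimate the current stack depth from the address of a local variable relative to a reference address taken at startup, and bail out before recursing further. This mimics the idea of PostgreSQL's check_stack_depth(), not its exact code; the limit and the recursive function are invented for the example.

    #include <stdio.h>
    #include <stdlib.h>

    static char *stack_base;
    #define MAX_STACK_BYTES (512 * 1024)    /* illustrative limit */

    /* Comparing addresses of locals across frames is the usual trick for
     * estimating stack usage; it is a heuristic, good enough for a guard. */
    static int
    stack_is_too_deep(void)
    {
        char   here;
        size_t used = (size_t) (stack_base > &here ? stack_base - &here
                                                   : &here - stack_base);
        return used > MAX_STACK_BYTES;
    }

    /* A deliberately recursive function, standing in for things like
     * tsquery parsing or record comparison. */
    static long
    count_down(long n)
    {
        if (stack_is_too_deep())
        {
            fprintf(stderr, "would exceed stack limit, aborting\n");
            exit(1);
        }
        return (n <= 0) ? 0 : 1 + count_down(n - 1);
    }

    int
    main(void)
    {
        char base;

        stack_base = &base;
        printf("recursed %ld levels safely\n", count_down(1000));
        return 0;
    }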
48f6310 pgcrypto: Detect and report too-short crypt() salts. Certain short salts crashed the backend or disclosed a few bytes of backend memory. For existing salt-induced error conditions, emit a message saying as much. Back-patch to 9.0 (all supported versions). Josh Kupershmidt Security: CVE-2015-5288 05 October 2015, 14:06:35 UTC
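A sketch of the defensive check that fix adds: refuse a DES-style crypt() salt shorter than two characters instead of letting later code read past the end of the string. The salt alphabet and messages below are illustrative, not pgcrypto's exact code.

    #include <stdio.h>
    #include <string.h>

    static int
    salt_is_valid(const char *salt)
    {
        static const char alphabet[] =
            "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz";

        if (salt == NULL || strlen(salt) < 2)
            return 0;                       /* too short: reject up front */
        return strchr(alphabet, salt[0]) != NULL &&
               strchr(alphabet, salt[1]) != NULL;
    }

    int
    main(void)
    {
        const char *good = "ab";
        const char *bad  = "a";     /* a one-character salt used to misbehave */

        printf("\"%s\": %s\n", good, salt_is_valid(good) ? "ok" : "invalid salt");
        printf("\"%s\": %s\n", bad,  salt_is_valid(bad)  ? "ok" : "invalid salt");
        return 0;
    }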
7116a3e Re-align *_freeze_max_age reloption limits with corresponding GUC limits. In 020235a5754 I lowered the autovacuum_*freeze_max_age minimums to allow for easier testing of wraparounds. I did not touch the corresponding per-table limits. While those don't matter for the purpose of wraparound, it seems more consistent to lower them as well. It's noteworthy that the previous reloption lower limit for autovacuum_multixact_freeze_max_age was too high by an order of magnitude, even before 020235a5754. Discussion: 26377.1443105453@sss.pgh.pa.us Backpatch: back to 9.0 (in parts), like the prior patch 05 October 2015, 09:57:20 UTC
2be5a44 Release notes for 9.5beta1, 9.4.5, 9.3.10, 9.2.14, 9.1.19, 9.0.23. 04 October 2015, 23:38:01 UTC
d84cc40 Further twiddling of nodeHash.c hashtable sizing calculation. On reflection, the submitted patch didn't really work to prevent the request size from exceeding MaxAllocSize, because of the fact that we'd happily round nbuckets up to the next power of 2 after we'd limited it to max_pointers. The simplest way to enforce the limit correctly is to round max_pointers down to a power of 2 when it isn't one already. (Note that the constraint to INT_MAX / 2, if it were doing anything useful at all, is properly applied after that.) 04 October 2015, 19:55:07 UTC
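The sizing rule described in these two nodeHash.c fixes amounts to the following sketch: cap the bucket-pointer count so the array cannot exceed an allocation ceiling, and round that cap down to a power of 2 so a later "round nbuckets up to a power of 2" step can no longer push the request back over the limit. The 1 GB ceiling mirrors MaxAllocSize; the variable names and requested size are illustrative.

    #include <stddef.h>
    #include <stdio.h>

    #define ALLOC_CEILING ((size_t) 0x3fffffff)     /* ~1 GB, like MaxAllocSize */

    static size_t
    round_down_pow2(size_t n)
    {
        size_t p = 1;

        while (p * 2 <= n)
            p *= 2;
        return p;
    }

    int
    main(void)
    {
        size_t max_pointers = ALLOC_CEILING / sizeof(void *);
        max_pointers = round_down_pow2(max_pointers);   /* the key step */

        size_t nbuckets = 200000000;                    /* requested */
        if (nbuckets > max_pointers)
            nbuckets = max_pointers;

        /* a later "round nbuckets up to a power of 2" now cannot exceed
         * max_pointers, so the pointer array stays under the ceiling */
        printf("nbuckets = %zu, array = %zu bytes\n",
               nbuckets, nbuckets * sizeof(void *));
        return 0;
    }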
a8168fb Fix possible "invalid memory alloc request size" failure in nodeHash.c. Limit the size of the hashtable pointer array to not more than MaxAllocSize. We've seen reports of failures due to this in HEAD/9.5, and it seems possible in older branches as well. The change in NTUP_PER_BUCKET in 9.5 may have made the problem more likely, but surely it didn't introduce it. Tomas Vondra, slightly modified by me 04 October 2015, 18:17:24 UTC
3a68e0a Update time zone data files to tzdata release 2015g. DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island, North Korea, Turkey, Uruguay. New zone America/Fort_Nelson for Canadian Northern Rockies. 02 October 2015, 23:16:29 UTC
f80af01 Add recursion depth protection to LIKE matching. Since MatchText() recurses, it could in principle be driven to stack overflow, although quite a long pattern would be needed. 02 October 2015, 19:00:52 UTC
e7de1bc Add recursion depth protections to regular expression matching. Some of the functions in regex compilation and execution recurse, and therefore could in principle be driven to stack overflow. The Tcl crew has seen this happen in practice in duptraverse(), though their fix was to put in a hard-wired limit on the number of recursive levels, which is not too appetizing --- fortunately, we have enough infrastructure to check the actually available stack. Greg Stark has also seen it in other places while fuzz testing on a machine with limited stack space. Let's put guards in to prevent crashes in all these places. Since the regex code would leak memory if we simply threw elog(ERROR), we have to introduce an API that checks for stack depth without throwing such an error. Fortunately that's not difficult. 02 October 2015, 18:51:59 UTC
6301549 Fix potential infinite loop in regular expression execution. In cfindloop(), if the initial call to shortest() reports that a zero-length match is possible at the current search start point, but then it is unable to construct any actual match to that, it'll just loop around with the same start point, and thus make no progress. We need to force the start point to be advanced. This is safe because the loop over "begin" points has already tried and failed to match starting at "close", so there is surely no need to try that again. This bug was introduced in commit e2bd904955e2221eddf01110b1f25002de2aaa83, wherein we allowed continued searching after we'd run out of match possibilities, but evidently failed to think hard enough about exactly where we needed to search next. Because of the way this code works, such a match failure is only possible in the presence of backrefs --- otherwise, shortest()'s judgment that a match is possible should always be correct. That probably explains how come the bug has escaped detection for several years. The actual fix is a one-liner, but I took the trouble to add/improve some comments related to the loop logic. After fixing that, the submitted test case "()*\1" didn't loop anymore. But it reported failure, though it seems like it ought to match a zero-length string; both Tcl and Perl think it does. That seems to be from overenthusiastic optimization on my part when I rewrote the iteration match logic in commit 173e29aa5deefd9e71c183583ba37805c8102a72: we can't just "declare victory" for a zero-length match without bothering to set match data for capturing parens inside the iterator node. Per fuzz testing by Greg Stark. The first part of this is a bug in all supported branches, and the second part is a bug since 9.2 where the iteration rewrite happened. 02 October 2015, 18:26:36 UTC
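A toy illustration of the loop hazard fixed there: a scanner asks a matcher whether anything could match at the current start point, and when the only possibility was a zero-length match that then fails verification, it must advance the start pointer anyway or it will retry the same spot forever. The two stand-in functions below are not the regex engine, just a stand-in for shortest()-style "possible" versus "verified" answers.

    #include <stdio.h>

    /* Pretend matcher: claims a zero-length match is possible everywhere,
     * but can only verify a real match on the letter 'x'. */
    static int
    possible_here(const char *s)  { (void) s; return 1; }

    static int
    verified_here(const char *s)  { return *s == 'x'; }

    int
    main(void)
    {
        const char *text = "abcxdef";

        for (const char *start = text; *start != '\0'; )
        {
            if (possible_here(start) && verified_here(start))
            {
                printf("match at offset %ld\n", (long) (start - text));
                break;
            }
            /* The bug: looping back without moving 'start' made no progress.
             * The fix is to force the start point forward. */
            start++;
        }
        return 0;
    }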
da8ff29 Add some more query-cancel checks to regular expression matching. Commit 9662143f0c35d64d7042fbeaf879df8f0b54be32 added infrastructure to allow regular-expression operations to be terminated early in the event of SIGINT etc. However, fuzz testing by Greg Stark disclosed that there are still cases where regex compilation could run for a long time without noticing a cancel request. Specifically, the fixempties() phase never adds new states, only new arcs, so it doesn't hit the cancel check I'd put in newstate(). Add one to newarc() as well to cover that. Some experimentation of my own found that regex execution could also run for a long time despite a pending cancel. We'd put a high-level cancel check into cdissect(), but there was none inside the core text-matching routines longest() and shortest(). Ordinarily those inner loops are very very fast ... but in the presence of lookahead constraints, not so much. As a compromise, stick a cancel check into the stateset cache-miss function, which is enough to guarantee a cancel check at least once per lookahead constraint test. Making this work required more attention to error handling throughout the regex executor. Henry Spencer had apparently originally intended longest() and shortest() to be incapable of incurring errors while running, so neither they nor their subroutines had well-defined error reporting behaviors. However, that was already broken by the lookahead constraint feature, since lacon() can surely suffer an out-of-memory failure --- which, in the code as it stood, might never be reported to the user at all, but just silently be treated as a non-match of the lookahead constraint. Normalize all that by inserting explicit error tests as needed. I took the opportunity to add some more comments to the code, too. Back-patch to all supported branches, like the previous patch. 02 October 2015, 17:45:39 UTC
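The pattern that commit spreads further through the regex code can be sketched as below: a flag set asynchronously (here by a SIGINT handler) is polled inside a hot inner loop so a long-running operation notices a cancel request promptly. The work loop is a placeholder for paths such as newarc() or the stateset cache-miss function, and the error handling is deliberately simplified.

    #include <signal.h>
    #include <stdio.h>

    static volatile sig_atomic_t cancel_pending = 0;

    static void
    handle_sigint(int signo)
    {
        (void) signo;
        cancel_pending = 1;
    }

    int
    main(void)
    {
        signal(SIGINT, handle_sigint);

        for (long i = 0; i < 100000000L; i++)
        {
            /* cheap check, done on every iteration of the "expensive" loop */
            if (cancel_pending)
            {
                fprintf(stderr, "operation cancelled at iteration %ld\n", i);
                return 1;
            }
            /* ... real work would happen here ... */
        }
        printf("finished without interruption\n");
        return 0;
    }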
3b0c1d9 Docs: add disclaimer about hazards of using regexps from untrusted sources. It's not terribly hard to devise regular expressions that take large amounts of time and/or memory to process. Recent testing by Greg Stark has also shown that machines with small stack limits can be driven to stack overflow by suitably crafted regexps. While we intend to fix these things as much as possible, it's probably impossible to eliminate slow-execution cases altogether. In any case we don't want to treat such things as security issues. The history of that code should already discourage prudent DBAs from allowing execution of regexp patterns coming from possibly-hostile sources, but it seems like a good idea to warn about the hazard explicitly. Currently, similar_escape() allows access to enough of the underlying regexp behavior that the warning has to apply to SIMILAR TO as well. We might be able to make it safer if we tightened things up to allow only SQL-mandated capabilities in SIMILAR TO; but that would be a subtly non-backwards-compatible change, so it requires discussion and probably could not be back-patched. Per discussion among pgsql-security list. 02 October 2015, 17:30:43 UTC
b44a55f Fix documentation error in commit 8703059c6b55c427100e00a09f66534b6ccbfaa1. Etsuro Fujita spotted a thinko in the README commentary. 01 October 2015, 14:31:45 UTC
2bbe8a6 Improve LISTEN startup time when there are many unread notifications. If some existing listener is far behind, incoming new listener sessions would start from that session's read pointer and then need to advance over many already-committed notification messages, which they have no interest in. This was expensive in itself and also thrashed the pg_notify SLRU buffers a lot more than necessary. We can improve matters considerably in typical scenarios, without much added cost, by starting from the furthest-ahead read pointer, not the furthest-behind one. We do have to consider only sessions in our own database when doing this, which requires an extra field in the data structure, but that's a pretty small cost. Back-patch to 9.0 where the current LISTEN/NOTIFY logic was introduced. Matt Newell, slightly adjusted by me 01 October 2015, 03:32:23 UTC
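The startup change for LISTEN can be sketched as follows: instead of adopting the furthest-behind read pointer among existing listeners, a new listener adopts the furthest-ahead pointer among listeners in its own database, skipping committed notifications it could never care about. The Listener struct and the flat position encoding below are invented for illustration; the real queue uses SLRU pages.

    #include <stdio.h>

    typedef struct
    {
        int  db_oid;        /* database the listener belongs to */
        long read_pos;      /* position in the notification queue */
    } Listener;

    static long
    initial_read_pos(const Listener *all, int n, int my_db, long queue_tail)
    {
        long pos = queue_tail;      /* fall back to the very beginning */

        for (int i = 0; i < n; i++)
        {
            /* only listeners in our own database are relevant */
            if (all[i].db_oid == my_db && all[i].read_pos > pos)
                pos = all[i].read_pos;
        }
        return pos;
    }

    int
    main(void)
    {
        Listener existing[] = {
            {1, 120},       /* far behind, our database */
            {1, 870},       /* far ahead, our database  */
            {2, 990},       /* other database: ignored  */
        };

        printf("new listener starts at position %ld\n",
               initial_read_pos(existing, 3, 1, 0));
        return 0;
    }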
ca6c2f8 Fix plperl to handle non-ASCII error message texts correctly. We were passing error message texts to croak() verbatim, which turns out not to work if the text contains non-ASCII characters; Perl mangles their encoding, as reported in bug #13638 from Michal Leinweber. To fix, convert the text into a UTF8-encoded SV first. It's hard to test this without risking failures in different database encodings; but we can follow the lead of plpython, which is already assuming that no-break space (U+00A0) has an equivalent in all encodings we care about running the regression tests in (cf commit 2dfa15de5). Back-patch to 9.1. The code is quite different in 9.0, and anyway it seems too risky to put something like this into 9.0's final minor release. Alex Hunsaker, with suggestions from Tim Bunce and Tom Lane 29 September 2015, 14:52:22 UTC
54499a1 Fix compiler warning about unused function in non-readline case. Backpatch to all live branches to keep the code in sync. 28 September 2015, 22:32:13 UTC