https://github.com/postgres/postgres

70f2e3e Last-minute updates for release notes. Revise description of CVE-2015-3166, in line with scaled-back patch. Change release date. Security: CVE-2015-3166 19 May 2015, 22:33:58 UTC
1334127 Revert error-throwing wrappers for the printf family of functions. This reverts commit 16304a013432931e61e623c8d85e9fe24709d9ba, except for its changes in src/port/snprintf.c; as well as commit cac18a76bb6b08f1ecc2a85e46c9d2ab82dd9d23 which is no longer needed. Fujii Masao reported that the previous commit caused failures in psql on OS X, since if one exits the pager program early while viewing a query result, psql sees an EPIPE error from fprintf --- and the wrapper function thought that was reason to panic. (It's a bit surprising that the same does not happen on Linux.) Further discussion among the security list concluded that the risk of other such failures was far too great, and that the one-size-fits-all approach to error handling embodied in the previous patch is unlikely to be workable. This leaves us again exposed to the possibility of the type of failure envisioned in CVE-2015-3166. However, that failure mode is strictly hypothetical at this point: there is no concrete reason to believe that an attacker could trigger information disclosure through the supposed mechanism. In the first place, the attack surface is fairly limited, since so much of what the backend does with format strings goes through stringinfo.c or psprintf(), and those already had adequate defenses. In the second place, even granting that an unprivileged attacker could control the occurrence of ENOMEM with some precision, it's a stretch to believe that he could induce it just where the target buffer contains some valuable information. So we concluded that the risk of non-hypothetical problems induced by the patch greatly outweighs the security risks. We will therefore revert, and instead undertake closer analysis to identify specific calls that may need hardening, rather than attempt a universal solution. We have kept the portion of the previous patch that improved snprintf.c's handling of errors when it calls the platform's sprintf(). That seems to be an unalloyed improvement. Security: CVE-2015-3166 19 May 2015, 22:16:58 UTC
b3288a6 Fix off-by-one error in Assertion. The point of the assertion is to ensure that the arrays allocated in stack are large enough, but the check was one item short. This won't matter in practice because MaxIndexTuplesPerPage is an overestimate, so you can't have that many items on a page in reality. But let's be tidy. Spotted by Anastasia Lubennikova. Backpatch to all supported versions, like the patch that added the assertion. 19 May 2015, 16:25:54 UTC
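As a toy illustration of this class of assertion bug (the names and sizes below are invented, not the actual index code): a guard on a fixed-size stack array must account for the element about to be stored, or it certifies the array large enough for one item too few.

```c
#include <assert.h>

#define MAX_ITEMS 8             /* stands in for MaxIndexTuplesPerPage */

/* Append one value to a stack array already holding nitems entries. */
static int
append_item(int *arr, int nitems, int value)
{
    /* Buggy form: assert(nitems <= MAX_ITEMS); -- one item short,
     * since we are about to write at index nitems. */
    assert(nitems + 1 <= MAX_ITEMS);

    arr[nitems] = value;
    return nitems + 1;
}
```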
8c479a8 Stamp 9.3.7. 18 May 2015, 18:31:21 UTC
8388680 Fix error message in pre_sync_fname. The old one didn't include %m anywhere, and required extra translation. Report by Peter Eisentraut. Fix by me. Review by Tom Lane. 18 May 2015, 17:17:01 UTC
32f8d57 Last-minute updates for release notes. Add entries for security issues. Security: CVE-2015-3165 through CVE-2015-3167 18 May 2015, 16:09:02 UTC
7b758b7 pgcrypto: Report errant decryption as "Wrong key or corrupt data". This has been the predominant outcome. When the output of decrypting with a wrong key coincidentally resembled an OpenPGP packet header, pgcrypto could instead report "Corrupt data", "Not text data" or "Unsupported compression algorithm". The distinct "Corrupt data" message added no value. The latter two error messages misled when the decrypted payload also exhibited fundamental integrity problems. Worse, error message variance in other systems has enabled cryptologic attacks; see RFC 4880 section "14. Security Considerations". Whether these pgcrypto behaviors are likewise exploitable is unknown. In passing, document that pgcrypto does not resist side-channel attacks. Back-patch to 9.0 (all supported versions). Security: CVE-2015-3167 18 May 2015, 14:02:37 UTC
c669915 Check return values of sensitive system library calls. PostgreSQL already checked the vast majority of these, missing this handful that nearly cannot fail. If putenv() failed with ENOMEM in pg_GSS_recvauth(), authentication would proceed with the wrong keytab file. If strftime() returned zero in cache_locale_time(), using the unspecified buffer contents could lead to information exposure or a crash. Back-patch to 9.0 (all supported versions). Other unchecked calls to these functions, especially those in frontend code, pose negligible security concern. This patch does not address them. Nonetheless, it is always better to check return values whose specification provides for indicating an error. In passing, fix an off-by-one error in strftime_win32()'s invocation of WideCharToMultiByte(). Upon retrieving a value of exactly MAX_L10N_DATA bytes, strftime_win32() would overrun the caller's buffer by one byte. MAX_L10N_DATA is chosen to exceed the length of every possible value, so the vulnerable scenario probably does not arise. Security: CVE-2015-3166 18 May 2015, 14:02:37 UTC
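A minimal standalone sketch of the strftime() half of this fix, assuming only what the C standard specifies: a zero return means the result did not fit, so the buffer contents must not be used.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int
main(void)
{
    char        buf[128];
    time_t      now = time(NULL);
    struct tm  *tm = localtime(&now);

    /* Zero means failure; the buffer contents are then unspecified,
     * so using them could expose stale data or crash. */
    if (tm == NULL || strftime(buf, sizeof(buf), "%A %d %B %Y", tm) == 0)
    {
        fprintf(stderr, "could not format current time\n");
        exit(1);
    }
    printf("%s\n", buf);
    return 0;
}
```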
34d21e7 Add error-throwing wrappers for the printf family of functions. All known standard library implementations of these functions can fail with ENOMEM. A caller neglecting to check for failure would experience missing output, information exposure, or a crash. Check return values within wrappers and code, currently just snprintf.c, that bypasses the wrappers. The wrappers do not return after an error, so their callers need not check. Back-patch to 9.0 (all supported versions). Popular free software standard library implementations do take pains to bypass malloc() in simple cases, but they risk ENOMEM for floating point numbers, positional arguments, large field widths, and large precisions. No specification demands such caution, so this commit regards every call to a printf family function as a potential threat. Injecting the wrappers implicitly is a compromise between patch scope and design goals. I would prefer to edit each call site to name a wrapper explicitly. libpq and the ECPG libraries would, ideally, convey errors to the caller rather than abort(). All that would be painfully invasive for a back-patched security fix, hence this compromise. Security: CVE-2015-3166 18 May 2015, 14:02:36 UTC
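A hedged sketch of the wrapper idea (not the committed code): check the printf family's return value and treat a negative result as fatal rather than silently losing output.

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative error-throwing wrapper; the committed patch injected
 * such wrappers implicitly over the whole printf family. */
static int
fprintf_checked(FILE *stream, const char *fmt, ...)
{
    va_list args;
    int     rc;

    va_start(args, fmt);
    rc = vfprintf(stream, fmt, args);
    va_end(args);

    if (rc < 0)                 /* e.g. ENOMEM inside the C library */
    {
        perror("fprintf");
        abort();
    }
    return rc;
}
```

As the revert above (commit 1334127) explains, this one-size-fits-all panic is exactly what backfired: exiting a pager early makes fprintf() fail with EPIPE, which is no reason to abort.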
d5abbd1 Permit use of vsprintf() in PostgreSQL code. The next commit needs it. Back-patch to 9.0 (all supported versions). 18 May 2015, 14:02:36 UTC
f4c12b4 Prevent a double free by not reentering be_tls_close(). Reentering this function with the right timing caused a double free, typically crashing the backend. By synchronizing a disconnection with the authentication timeout, an unauthenticated attacker could achieve this somewhat consistently. Call be_tls_close() solely from within proc_exit_prepare(). Back-patch to 9.0 (all supported versions). Benkocs Norbert Attila Security: CVE-2015-3165 18 May 2015, 14:02:36 UTC
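The committed fix centralizes the call in proc_exit_prepare(); as a generic illustration of the underlying pattern (purely a sketch, not the backend code), a close routine can be made idempotent so reentry cannot free the same state twice:

```c
#include <stdlib.h>

static void *tls_state = NULL;  /* stands in for per-connection TLS state */

/* Idempotent close: a second call (e.g. from a racing exit path)
 * finds the pointer already cleared and cannot double-free it. */
static void
tls_close_once(void)
{
    if (tls_state == NULL)
        return;
    free(tls_state);
    tls_state = NULL;
}
```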
b9403de Translation updates Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 3ce9e5ca72c3948b4c592e82a5ddb9b69b97d14b 18 May 2015, 12:40:50 UTC
271a68b Fix typos 18 May 2015, 02:22:19 UTC
01d42ca Release notes for 9.4.2, 9.3.7, 9.2.11, 9.1.16, 9.0.20. 17 May 2015, 19:54:20 UTC
4e99359 pg_upgrade: properly handle timeline variables There is no behavior change here as we now always set the timeline to one. Report by Tom Lane Backpatch to 9.3 and 9.4 16 May 2015, 19:16:28 UTC
b054732 Fix docs typo I don't think "respectfully" is what was meant here ... 16 May 2015, 17:28:26 UTC
bffbeec pg_upgrade: force timeline 1 in the new cluster Previously, this prevented promoted standby servers from being upgraded because of a missing WAL history file. (Timeline 1 doesn't need a history file, and we don't copy WAL files anyway.) Report by Christian Echerer(?), Alexey Klyukin Backpatch through 9.0 16 May 2015, 04:40:18 UTC
4cfba53 pg_upgrade: only allow template0 to be non-connectable This patch causes pg_upgrade to error out during its check phase if: (1) template0 is marked connectable or (2) any other database is marked non-connectable This is done because, in the first case, pg_upgrade would fail because the pg_dumpall --globals restore would fail, and in the second case, the database would not be restored, leading to data loss. Report by Matt Landry (1), Stephen Frost (2) Backpatch through 9.0 16 May 2015, 04:10:03 UTC
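A standalone libpq sketch of the check described above: list databases that would make pg_upgrade's check phase fail, i.e. template0 marked connectable or any other database marked non-connectable. The connection string is an assumption for illustration.

```c
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Matches exactly the rows that violate the rule above: datallowconn
     * should be false for template0 and true for everything else. */
    res = PQexec(conn,
                 "SELECT datname FROM pg_database "
                 "WHERE datallowconn = (datname = 'template0')");

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (int i = 0; i < PQntuples(res); i++)
            printf("problem database: %s\n", PQgetvalue(res, i, 0));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```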
4fd69e4 Update time zone data files to tzdata release 2015d. DST law changes in Egypt, Mongolia, Palestine. Historical corrections for Canada and Chile. Revised zone abbreviation for America/Adak (HST/HDT not HAST/HADT). 15 May 2015, 23:35:58 UTC
13a2b7b Docs: fix erroneous claim about max byte length of GB18030. This encoding has characters up to 4 bytes long, not 2. 14 May 2015, 18:59:00 UTC
96b676c Fix RBM_ZERO_AND_LOCK mode to not acquire lock on local buffers. Commit 81c45081 introduced a new RBM_ZERO_AND_LOCK mode to ReadBuffer, which takes a lock on the buffer before zeroing it. However, you cannot take a lock on a local buffer, and you got a segfault instead. The version of that patch committed to master included a check for !isLocalBuf, and therefore didn't crash, but oddly I missed that in the back-patched versions. This patch adds that check to the back-branches too. RBM_ZERO_AND_LOCK mode is only used during WAL replay, and in hash indexes. WAL replay only deals with shared buffers, so the only way to trigger the bug is with a temporary hash index. Reported by Artem Ignatyev, analysis by Tom Lane. 13 May 2015, 06:54:06 UTC
7d09fdf Fix incorrect checking of deferred exclusion constraint after a HOT update. If a row that potentially violates a deferred exclusion constraint is HOT-updated later in the same transaction, the exclusion constraint would be reported as violated when the check finally occurs, even if the row(s) the new row originally conflicted with have since been removed. This happened because the wrong TID was passed to check_exclusion_constraint(), causing the live HOT-updated row to be seen as a conflicting row rather than recognized as the row-under-test. Per bug #13148 from Evan Martin. It's been broken since exclusion constraints were invented, so back-patch to all supported branches. 11 May 2015, 16:25:45 UTC
ddebd21 Increase threshold for multixact member emergency autovac to 50%. Analysis by Noah Misch shows that the 25% threshold set by commit 53bb309d2d5a9432d2602c93ed18e58bd2924e15 is lower than any other, similar autovac threshold. While we don't know exactly what value will be optimal for all users, it is better to err a little on the high side than on the low side. A higher value increases the risk that users might exhaust the available space and start seeing errors before autovacuum can clean things up sufficiently, but a user who hits that problem can compensate for it by reducing autovacuum_multixact_freeze_max_age to a value dependent on their average multixact size. On the flip side, if the emergency cap imposed by that patch kicks in too early, the user will experience excessive wraparound scanning and will be unable to mitigate that problem by configuration. The new value will hopefully reduce the risk of such bad experiences while still providing enough headroom to avoid multixact member exhaustion for most users. Along the way, adjust the documentation to reflect the effects of commit 04e6d3b877e060d8445eb653b7ea26b1ee5cec6b, which taught autovacuum to run for multixact wraparound even when autovacuum is configured off. 11 May 2015, 16:16:51 UTC
543fbec Even when autovacuum=off, force it for members as we do in other cases. Thomas Munro, with some adjustments by me. 11 May 2015, 14:56:32 UTC
5bbac7e Advance the stop point for multixact offset creation only at checkpoint. Commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c advanced the stop point at vacuum time, but this has subsequently been shown to be unsafe as a result of analysis by myself and Thomas Munro and testing by Thomas Munro. The crux of the problem is that the SLRU deletion logic may get confused about what to remove if, at exactly the right time during the checkpoint process, the head of the SLRU crosses what used to be the tail. This patch, by me, fixes the problem by advancing the stop point only following a checkpoint. This has the additional advantage of making the removal logic work during recovery more like the way it works during normal running, which is probably good. At least one of the calls to DetermineSafeOldestOffset which this patch removes was already dead, because MultiXactAdvanceOldest is called only during recovery and DetermineSafeOldestOffset was set up to do nothing during recovery. That, however, is inconsistent with the principle that recovery and normal running should work similarly, and was confusing to boot. Along the way, fix some comments that previous patches in this area neglected to update. It's not clear to me whether there's any concrete basis for the decision to use only half of the multixact ID space, but it's neither necessary nor sufficient to prevent multixact member wraparound, so the comments should not say otherwise. 11 May 2015, 02:45:42 UTC
24aa77e Fix DetermineSafeOldestOffset for the case where there are no mxacts. Commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c failed to take into account the possibility that there might be no multixacts in existence at all. Report by Thomas Munro; patch by me. 11 May 2015, 01:47:41 UTC
3de791e Recommend include_realm=1 in docs As discussed, the default setting of include_realm=0 can be dangerous in multi-realm environments because it is then impossible to differentiate users with the same username but who are from two different realms. Recommend include_realm=1 and note that the default setting may change in a future version of PostgreSQL and therefore users may wish to explicitly set include_realm to avoid issues while upgrading. 08 May 2015, 23:40:06 UTC
596fb5a Teach autovacuum about multixact member wraparound. The logic introduced in commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c and repaired in commits 669c7d20e6374850593cb430d332e11a3992bbcf and 7be47c56af3d3013955c91c2877c08f2a0e3e6a2 helps to ensure that we don't overwrite old multixact member information while it is still needed, but a user who creates many large multixacts can still exhaust the member space (and thus start getting errors) while autovacuum stands idly by. To fix this, progressively ramp down the effective value (but not the actual contents) of autovacuum_multixact_freeze_max_age as member space utilization increases. This makes autovacuum more aggressive and also reduces the threshold for a manual VACUUM to perform a full-table scan. This patch leaves unsolved the problem of ensuring that emergency autovacuums are triggered even when autovacuum=off. We'll need to fix that via a separate patch. Thomas Munro and Robert Haas 08 May 2015, 16:55:14 UTC
83fbd9b Fix incorrect math in DetermineSafeOldestOffset. The old formula didn't have enough parentheses, so it would do the wrong thing, and it used / rather than % to find a remainder. The effect of these oversights is that the stop point chosen by the logic introduced in commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c might be rather meaningless. Thomas Munro, reviewed by Kevin Grittner, with a whitespace tweak by me. 07 May 2015, 15:16:41 UTC
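A small self-contained illustration of why the operator and parentheses matter here (the constant and variable names are invented): rounding an offset down to a segment boundary needs the remainder, and using / where % was intended yields a meaningless stop point.

```c
#include <stdio.h>

#define ITEMS_PER_SEGMENT 1024  /* invented segment size for illustration */

int
main(void)
{
    unsigned int offset = 5000;

    /* Correct: subtract the remainder to round down to a boundary. */
    unsigned int rounded = offset - offset % ITEMS_PER_SEGMENT;  /* 4096 */

    /* Wrong: / where % was intended produces a near-arbitrary value. */
    unsigned int bogus = offset - offset / ITEMS_PER_SEGMENT;    /* 4996 */

    printf("rounded=%u bogus=%u\n", rounded, bogus);
    return 0;
}
```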
ba3caee Properly send SCM status updates when shutting down service on Windows The Service Control Manager should be notified regularly during a shutdown that takes a long time. Previously we would increase the counter but forget to actually send the notification to the system. The loop counter was also incorrectly initialized in the event that the startup of the system took long enough for it to increase, which could cause the shutdown process not to wait as long as expected. Krystian Bigaj, reviewed by Michael Paquier 07 May 2015, 13:09:32 UTC
cf7d5aa citext's regexp_matches() functions weren't documented, either. 05 May 2015, 20:11:13 UTC
ffac9f6 Fix incorrect declaration of citext's regexp_matches() functions. These functions should return SETOF TEXT[], like the core functions they are wrappers for; but they were incorrectly declared as returning just TEXT[]. This mistake had two results: first, if there was no match you got a scalar null result, whereas what you should get is an empty set (zero rows). Second, the 'g' flag was effectively ignored, since you would get only one result array even if there were multiple matches, as reported by Jeff Certain. While ignoring 'g' is a clear bug, the behavior for no matches might well have been thought to be the intended behavior by people who hadn't compared it carefully to the core regexp_matches() functions. So we should tread carefully about introducing this change in the back branches. Still, it clearly is a bug and so providing some fix is desirable. After discussion, the conclusion was to introduce the change in a 1.1 version of the citext extension (as we would need to do anyway); 1.0 still contains the incorrect behavior. 1.1 is the default and only available version in HEAD, but it is optional in the back branches, where 1.0 remains the default version. People wishing to adopt the fix in back branches will need to explicitly do ALTER EXTENSION citext UPDATE TO '1.1'. (I also provided a downgrade script in the back branches, so people could go back to 1.0 if necessary.) This should be called out as an incompatible change in the 9.5 release notes, although we'll also document it in the next set of back-branch release notes. The notes should mention that any views or rules that use citext's regexp_matches() functions will need to be dropped before upgrading to 1.1, and then recreated again afterwards. Back-patch to 9.1. The bug goes all the way back to citext's introduction in 8.4, but pre-9.1 there is no extension mechanism with which to manage the change. Given the lack of previous complaints it seems unnecessary to change this behavior in 9.0, anyway. 05 May 2015, 19:50:53 UTC
6fd6669 Fix some problems with patch to fsync the data directory. pg_win32_is_junction() was a typo for pgwin32_is_junction(). open() was used not only in a two-argument form, which breaks on Windows, but also where BasicOpenFile() should have been used. Per reports from Andrew Dunstan and David Rowley. 05 May 2015, 13:19:39 UTC
14de825 Recursively fsync() the data directory after a crash. Otherwise, if there's another crash, some writes from after the first crash might make it to disk while writes from before the crash fail to make it to disk. This could lead to data corruption. Back-patch to all supported versions. Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised by me. 04 May 2015, 16:27:55 UTC
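A hedged sketch of the general technique using POSIX nftw() (the real patch walks the data directory with its own helpers and additionally handles directories, symlinks, and Windows): fsync every regular file under a tree.

```c
#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static int
fsync_one(const char *path, const struct stat *sb, int flag, struct FTW *ftwbuf)
{
    (void) sb;
    (void) ftwbuf;

    if (flag == FTW_F)
    {
        int fd = open(path, O_RDONLY);

        if (fd >= 0)
        {
            if (fsync(fd) != 0)
                fprintf(stderr, "fsync %s failed\n", path);
            close(fd);
        }
    }
    return 0;                   /* keep walking even on error */
}

int
main(int argc, char **argv)
{
    return nftw(argc > 1 ? argv[1] : ".", fsync_one, 64, FTW_PHYS);
}
```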
e60581f Fix pg_upgrade's multixact handling (again) We need to create the pg_multixact/offsets file deleted by pg_upgrade much earlier than we originally were: it was in TrimMultiXact(), which runs after we exit recovery, but it actually needs to run earlier than the first call to SetMultiXactIdLimit (before recovery), because that routine already wants to read the first offset segment. Per pg_upgrade trouble report from Jeff Janes. While at it, silence a compiler warning about a pointless assert that an unsigned variable was being tested non-negative. This was a signed constant in Thomas Munro's patch which I changed to unsigned before commit. Pointed out by Andres Freund. 30 April 2015, 16:55:06 UTC
cf0d888 Code review for multixact bugfix Reword messages, rename a confusingly named function. Per Robert Haas. 28 April 2015, 17:52:29 UTC
e2eda4b Protect against multixact members wraparound Multixact member files are subject to early wraparound overflow and removal: if the average multixact size is above a certain threshold (see note below) the protections against offset overflow are not enough: during multixact truncation at checkpoint time, some pg_multixact/members files would be removed because the server considers them to be old and not needed anymore. This leads to loss of files that are critical to interpret existing tuples' Xmax values. To protect against this, since we don't have enough info in pg_control and we can't modify it in old branches, we maintain shared memory state about the oldest value that we need to keep; we use this during new multixact creation to abort if an old still-needed file would get overwritten. This value is kept up to date by checkpoints, which makes it not completely accurate but should be good enough. We start emitting warnings sometime earlier, so that the eventual multixact shutdown doesn't take DBAs completely by surprise (more precisely: once only 20 member SLRU segments remain before shutdown.) On troublesome average multixact size: The threshold size depends on the multixact freeze parameters. The oldest age is related to the greater of multixact_freeze_table_age and multixact_freeze_min_age: anything older than that should be removed promptly by autovacuum. If autovacuum is keeping up with multixact freezing, the troublesome multixact average size is (2^32-1) / Max(freeze table age, freeze min age) or around 28 members per multixact. Having an average multixact size larger than that will eventually cause new multixact data to overwrite the data area for older multixacts. (If autovacuum is not able to keep up, or there are errors in vacuuming, the actual maximum is multixact_freeze_max_age instead, at which point multixact generation is stopped completely. The default value for this limit is 400 million, which means that the multixact size that would cause trouble is about 10 members). Initial bug report by Timothy Garnett, bug #12990 Backpatch to 9.3, where the problem was introduced. Authors: Álvaro Herrera, Thomas Munro Reviews: Thomas Munro, Amit Kapila, Robert Haas, Kevin Grittner 28 April 2015, 14:32:53 UTC
723613e Build libecpg with -DFRONTEND in all supported versions. Fix an oversight in commit 151e74719b0cc5c040bd3191b51b95f925773dd1 by back-patching commit 44c5d387eafb4ba1a032f8d7b13d85c553d69181 to 9.0. 26 April 2015, 21:20:10 UTC
3e47d0b Prevent improper reordering of antijoins vs. outer joins. An outer join appearing within the RHS of an antijoin can't commute with the antijoin, but somehow I missed teaching make_outerjoininfo() about that. In Teodor Sigaev's recent trouble report, this manifests as a "could not find RelOptInfo for given relids" error within eqjoinsel(); but I think silently wrong query results are possible too, if the planner misorders the joins and doesn't happen to trigger any internal consistency checks. It's broken as far back as we had antijoins, so back-patch to all supported branches. 25 April 2015, 20:44:27 UTC
05c1392 Build every ECPG library with -DFRONTEND. Each of the libraries incorporates src/port files, which often check FRONTEND. Build systems disagreed on whether to build libpgtypes this way. Only libecpg incorporates files that rely on it today. Back-patch to 9.0 (all supported versions) to forestall surprises. 24 April 2015, 23:29:24 UTC
c82e13a Fix obsolete comment in set_rel_size(). The cross-reference to set_append_rel_pathlist() was obsoleted by commit e2fa76d80ba571d4de8992de6386536867250474, which split what had been set_rel_pathlist() and child routines into two sets of functions. But I (tgl) evidently missed updating this comment. Back-patch to 9.2 to avoid unnecessary divergence among branches. Amit Langote 24 April 2015, 19:18:43 UTC
f73ebd7 Fix deadlock at startup, if max_prepared_transactions is too small. When the startup process recovers transactions by scanning pg_twophase directory, it should clear MyLockedGxact after it's done processing each transaction. Like we do during normal operation, at PREPARE TRANSACTION. Otherwise, if the startup process exits due to an error, it will try to clear the locking_backend field of the last recovered transaction. That's usually harmless, but if the error happens in MarkAsPreparing, while holding TwoPhaseStateLock, the shmem-exit hook will try to acquire TwoPhaseStateLock again, and deadlock with itself. This fixes bug #13128 reported by Grant McAlister. The bug was introduced by commit bb38fb0d, so backpatch to all supported versions like that commit. 23 April 2015, 18:36:24 UTC
7954bc5 Fix typo in comment SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT I introduced this ancient typo in subtrans.c and later propagated it to multixact.c. I fixed the latter in f741300c, but only back to 9.3; backpatch to all supported branches for consistency. 14 April 2015, 15:12:18 UTC
a800267 Don't archive bogus recycled or preallocated files after timeline switch. After a timeline switch, we would leave behind recycled WAL segments that are in the future, but on the old timeline. After promotion, and after they become old enough to be recycled again, we would notice that they don't have a .ready or .done file, create a .ready file for them, and archive them. That's bogus, because the files contain garbage, recycled from an older timeline (or preallocated as zeros). We shouldn't archive such files. This could happen when we're following a timeline switch during replay, or when we switch to a new timeline at end-of-recovery. To fix, whenever we switch to a new timeline, scan the data directory for WAL segments on the old timeline, but with a higher segment number, and remove them. Those don't belong to our timeline history, and are most likely bogus recycled or preallocated files. They could also be valid files that we streamed from the primary ahead of time, but in any case, they're not needed to recover to the new timeline. 13 April 2015, 14:22:35 UTC
8dfddf1 Remove duplicated words in comments. David Rowley 12 April 2015, 07:49:34 UTC
3b4da9a Fix incorrect punctuation Amit Langote 09 April 2015, 11:36:07 UTC
0d6c9e0 Fix autovacuum launcher shutdown sequence It was previously possible to have the launcher re-execute its main loop before shutting down if some other signal was received or an error occurred after getting SIGTERM, as reported by Qingqing Zhou. While investigating, Tom Lane further noticed that if autovacuum had been disabled in the config file, it would misbehave by trying to start a new worker instead of bailing out immediately -- it would consider itself as invoked in emergency mode. Fix both problems by checking the shutdown flag in a few more places. These problems have existed since autovacuum was introduced, so backpatch all the way back. 08 April 2015, 16:19:49 UTC
b1145ca Fix assorted inconsistent function declarations. While gcc doesn't complain if you declare a function "static" and then define it not-static, other compilers do; and in any case the code is highly misleading this way. Add the missing "static" keywords to a couple of recent patches. Per buildfarm member pademelon. 07 April 2015, 20:56:21 UTC
4e3b1e2 Fix typo in libpq.sgml. Back-patch to all supported versions. Michael Paquier 06 April 2015, 03:17:24 UTC
6347bdb Suppress clang's unhelpful gripes about -pthread switch being unused. Considering the number of cases in which "unused" command line arguments are silently ignored by compilers, it's fairly astonishing that anybody thought this warning was useful; it's certainly nothing but an annoyance when building Postgres. One such case is that neither gcc nor clang complain about unrecognized -Wno-foo switches, making it more difficult to figure out whether the switch does anything than one could wish. Back-patch to 9.3, which is as far back as the patch applies conveniently (we'd have to back-patch PGAC_PROG_CC_VAR_OPT to go further, and it doesn't seem worth that). 05 April 2015, 17:01:55 UTC
e105df2 Fix incorrect matching of subexpressions in outer-join plan nodes. Previously we would re-use input subexpressions in all expression trees attached to a Join plan node. However, if it's an outer join and the subexpression appears in the nullable-side input, this is potentially incorrect for apparently-matching subexpressions that came from above the outer join (ie, targetlist and qpqual expressions), because the executor will treat the subexpression value as NULL when maybe it should not be. The case is fairly hard to hit because (a) you need a non-strict subexpression (else NULL is correct), and (b) we don't usually compute expressions in the outputs of non-toplevel plan nodes. But we might do so if the expressions are sort keys for a mergejoin, for example. Probably in the long run we should make a more explicit distinction between Vars appearing above and below an outer join, but that will be a major planner redesign and not at all back-patchable. For the moment, just hack set_join_references so that it will not match any non-Var expressions coming from nullable inputs to expressions that came from above the join. (This is somewhat overkill, in that a strict expression could still be matched, but it doesn't seem worth the effort to check that.) Per report from Qingqing Zhou. The added regression test case is based on his example. This has been broken for a very long time, so back-patch to all active branches. 04 April 2015, 23:55:15 UTC
cbccaf2 Remove unnecessary variables in _hash_splitbucket(). Commit ed9cc2b5df59fdbc50cce37399e26b03ab2c1686 made it unnecessary to pass start_nblkno to _hash_splitbucket(), and for that matter unnecessary to have the internal nblkno variable either. My compiler didn't complain about that, but some did. I also rearranged the use of oblkno a bit to make that case more parallel. Report and initial patch by Petr Jelinek, rearranged a bit by me. Back-patch to all branches, like the previous patch. 03 April 2015, 20:49:11 UTC
f4540ca psql: fix \connect with URIs and conninfo strings psql was already accepting conninfo strings as the first parameter in \connect, but the way it worked wasn't sane; some of the other parameters would get the previous connection's values, causing it to connect to a completely unexpected server or, more likely, not find any server at all because of completely wrong combinations of parameters. Fix by explicitly checking for a conninfo-looking parameter in the dbname position; if one is found, use its complete specification rather than mix with the other arguments. Also, change tab-completion to not try to complete conninfo/URI-looking "dbnames" and document that conninfos are accepted as first argument. There was a weak consensus to backpatch this, because while the behavior of using the dbname as a conninfo is nowhere documented for \connect, it is reasonable to expect that it works because it does work in many other contexts. Therefore this is backpatched all the way back to 9.0. To implement this, routines previously private to libpq have been duplicated so that psql can decide what looks like a conninfo/URI string. In back branches, just duplicate the same code all the way back to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version. In master, the routines are moved to src/common and renamed. Author: David Fetter, Andrew Dunstan. Some editorialization by me (probably earning a Gierth's "Sloppy" badge in the process.) Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost, Robert Haas, Andrew Dunstan. 01 April 2015, 23:00:07 UTC
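A simplified sketch of the "does this dbname look like a conninfo string or URI?" test described above (the routines actually duplicated from libpq are considerably more careful):

```c
#include <stdbool.h>
#include <string.h>

static bool
looks_like_conninfo(const char *dbname)
{
    /* URIs announce themselves with a recognized scheme prefix ... */
    if (strncmp(dbname, "postgresql://", 13) == 0 ||
        strncmp(dbname, "postgres://", 11) == 0)
        return true;

    /* ... while keyword/value conninfo strings contain an '=' sign. */
    return strchr(dbname, '=') != NULL;
}
```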
44f8f56 Fix incorrect markup in documentation of window frame clauses. You're required to write either RANGE or ROWS to start a frame clause, but the documentation incorrectly implied this is optional. Noted by David Johnston. 01 April 2015, 00:03:50 UTC
9f06729 Remove spurious semicolons. Petr Jelinek 31 March 2015, 12:15:04 UTC
0904eb3 Run pg_upgrade and pg_resetxlog with restricted token on Windows As with initdb, these programs need to run with a restricted token, and if they don't pg_upgrade will fail when run as a user with Administrator privileges. Backpatch to all live branches. On the development branch the code is reorganized so that the restricted token code is now in a single location. On the stable branches a less invasive change is made by simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c. Patches and bug report from Muhammad Asif Naeem, reviewed by Michael Paquier, slightly edited by me. 30 March 2015, 21:17:17 UTC
246bbf6 Fix bogus concurrent use of _hash_getnewbuf() in bucket split code. _hash_splitbucket() obtained the base page of the new bucket by calling _hash_getnewbuf(), but it held no exclusive lock that would prevent some other process from calling _hash_getnewbuf() at the same time. This is contrary to _hash_getnewbuf()'s API spec and could in fact cause failures. In practice, we must only call that function while holding write lock on the hash index's metapage. An additional problem was that we'd already modified the metapage's bucket mapping data, meaning that failure to extend the index would leave us with a corrupt index. Fix both issues by moving the _hash_getnewbuf() call to just before we modify the metapage in _hash_expandtable(). Unfortunately there's still a large problem here, which is that we could also incur ENOSPC while trying to get an overflow page for the new bucket. That would leave the index corrupt in a more subtle way, namely that some index tuples that should be in the new bucket might still be in the old one. Fixing that seems substantially more difficult; even preallocating as many pages as we could possibly need wouldn't entirely guarantee that the bucket split would complete successfully. So for today let's just deal with the base case. Per report from Antonin Houska. Back-patch to all active branches. 30 March 2015, 20:40:05 UTC
995a664 Add vacuum_delay_point call in compute_index_stats's per-sample-row loop. Slow functions in index expressions might cause this loop to take long enough to make it worth being cancellable. Probably it would be enough to call CHECK_FOR_INTERRUPTS here, but for consistency with other per-sample-row loops in this file, let's use vacuum_delay_point. Report and patch by Jeff Janes. Back-patch to all supported branches. 29 March 2015, 19:04:24 UTC
56abebb Make SyncRepWakeQueue a static function It is only used in src/backend/replication/syncrep.c. Back-patch to all supported branches except 9.1 which declares the function as static. 26 March 2015, 01:39:18 UTC
7cd5498 Fix ExecOpenScanRelation to take a lock on a ROW_MARK_COPY relation. ExecOpenScanRelation assumed that any relation listed in the ExecRowMark list has been locked by InitPlan; but this is not true if the rel's markType is ROW_MARK_COPY, which is possible if it's a foreign table. In most (possibly all) cases, failure to acquire a lock here isn't really problematic because the parser, planner, or plancache would have taken the appropriate lock already. In principle though it might leave us vulnerable to working with a relation that we hold no lock on, and in any case if the executor isn't depending on previously-taken locks otherwise then it should not do so for ROW_MARK_COPY relations. Noted by Etsuro Fujita. Back-patch to all active versions, since the inconsistency has been there a long time. (It's almost certainly irrelevant in 9.0, since that predates foreign tables, but the code's still wrong on its own terms.) 24 March 2015, 19:53:06 UTC
83587a0 Replace insertion sort in contrib/intarray with qsort(). It's all very well to claim that a simplistic sort is fast in easy cases, but O(N^2) in the worst case is not good ... especially if the worst case is as easy to hit as "descending order input". Replace that bit with our standard qsort. Per bug #12866 from Maksym Boguk. Back-patch to all active branches. 16 March 2015, 03:22:03 UTC
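The shape of the replacement, as a hedged sketch (comparator and helper names invented): hand the array to the standard qsort(), whose worst case does not degrade the way insertion sort's does on descending input.

```c
#include <stdlib.h>

static int
cmp_int(const void *a, const void *b)
{
    int av = *(const int *) a;
    int bv = *(const int *) b;

    /* Branchless three-way comparison that avoids int overflow. */
    return (av > bv) - (av < bv);
}

static void
sort_ints(int *vals, size_t n)
{
    qsort(vals, n, sizeof(int), cmp_int);
}
```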
2cb76fa Remove workaround for ancient incompatibility between readline and libedit. GNU readline defines the return value of write_history() as "zero if OK, else an errno code". libedit's version of that function used to have a different definition (to wit, "-1 if error, else the number of lines written to the file"). We tried to work around that by checking whether errno had become nonzero, but this method has never been kosher according to the published API of either library. It's reportedly completely broken in recent Ubuntu releases: psql bleats about "No such file or directory" when saving ~/.psql_history, even though the write worked fine. However, libedit has been following the readline definition since somewhere around 2006, so it seems all right to finally break compatibility with ancient libedit releases and trust that the return value is what readline specifies. (I'm not sure when the various Linux distributions incorporated this fix, but I did find that OS X has been shipping fixed versions since 10.5/Leopard.) If anyone is still using such an ancient libedit, they will find that psql complains it can't write ~/.psql_history at exit, even when the file was written correctly. This is no worse than the behavior we're fixing for current releases. Back-patch to all supported branches. 14 March 2015, 17:43:13 UTC
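A sketch of the convention the code now trusts (the caller name is invented): both GNU readline and post-2006 libedit return 0 from write_history() on success and an errno code on failure, so no separate errno check is needed.

```c
#include <stdio.h>
#include <string.h>
#include <readline/history.h>

static void
save_history_file(const char *fname)
{
    int rc = write_history(fname);

    /* Per the readline definition, a nonzero return is an errno code. */
    if (rc != 0)
        fprintf(stderr, "could not save history to %s: %s\n",
                fname, strerror(rc));
}
```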
089b371 Fix integer overflow in debug message of walreceiver The message tries to report the replication apply delay, which fails if the first WAL record has not been applied yet. The fix is to show "N/A", indicating that the delay data is not yet available, instead of an overflowed negative number. Problem reported by me and patch by Fabrízio de Royes Mello. Back patched to 9.4, 9.3 and 9.2 stable branches (9.1 and 9.0 do not have the debug message). 13 March 2015, 23:21:56 UTC
5bdf3cf Ensure tableoid reads correctly in EvalPlanQual-manufactured tuples. The ROW_MARK_COPY path in EvalPlanQualFetchRowMarks() was just setting tableoid to InvalidOid, I think on the assumption that the referenced RTE must be a subquery or other case without a meaningful OID. However, foreign tables also use this code path, and they do have meaningful table OIDs; so failure to set the tuple field can lead to user-visible misbehavior. Fix that by fetching the appropriate OID from the range table. There's still an issue about whether CTID can ever have a meaningful value in this case; at least with postgres_fdw foreign tables, it does. But that is a different problem that seems to require a significantly different patch --- it's debatable whether postgres_fdw really wants to use this code path at all. Simplified version of a patch by Etsuro Fujita, who also noted the problem to begin with. The issue can be demonstrated in all versions having FDWs, so back-patch to 9.1. 12 March 2015, 17:38:49 UTC
d16d821 Cast to (void *) rather than (int *) when passing int64's to PQfn(). This is a possibly-vain effort to silence a Coverity warning about bogus endianness dependency. The code's fine, because it takes care of endianness issues for itself, but Coverity sees an int64 being passed to an int* argument and not unreasonably suspects something's wrong. I'm not sure if putting the void* cast in the way will shut it up; but it can't hurt and seems better from a documentation standpoint anyway, since the pointer is not used as an int* in this code path. Just for a bit of additional safety, verify that the result length is 8 bytes as expected. Back-patch to 9.3 where the code in question was added. 08 March 2015, 17:58:39 UTC
9937f6e Fix documentation for libpq's PQfn(). The SGML docs claimed that 1-byte integers could be sent or received with the "isint" options, but no such behavior has ever been implemented in pqGetInt() or pqPutInt(). The in-code documentation header for PQfn() was even less in tune with reality, and the code itself used parameter names matching neither the SGML docs nor its libpq-fe.h declaration. Do a bit of additional wordsmithing on the SGML docs while at it. Since the business about 1-byte integers is a clear documentation bug, back-patch to all supported branches. 08 March 2015, 17:35:41 UTC
d645273 Rethink function argument sorting in pg_dump. Commit 7b583b20b1c95acb621c71251150beef958bb603 created an unnecessary dump failure hazard by applying pg_get_function_identity_arguments() to every function in the database, even those that won't get dumped. This could result in snapshot-related problems if concurrent sessions are, for example, creating and dropping temporary functions, as noted by Marko Tiikkaja in bug #12832. While this is by no means pg_dump's only such issue with concurrent DDL, it's unfortunate that we added a new failure mode for cases that used to work, and even more so that the failure was created for basically cosmetic reasons (ie, to sort overloaded functions more deterministically). To fix, revert that patch and instead sort function arguments using information that pg_dump has available anyway, namely the names of the argument types. This will produce a slightly different sort ordering for overloaded functions than the previous coding; but applying strcmp directly to the output of pg_get_function_identity_arguments really was a bit odd anyway. The sorting will still be name-based and hence independent of possibly-installation-specific OID assignments. A small additional benefit is that sorting now works regardless of server version. Back-patch to 9.3, where the previous commit appeared. 06 March 2015, 18:27:46 UTC
49bb343 Fix contrib/file_fdw's expected file I forgot to update it on yesterday's cf34e373fcf. 06 March 2015, 14:47:09 UTC
5cf4000 Fix user mapping object description We were using "user mapping for user XYZ" as description for user mappings, but that's ambiguous because users can have mappings on multiple foreign servers; therefore change it to "for user XYZ on server UVW" instead. Object identities for user mappings are also updated in the same way, in branches 9.3 and above. The incomplete description string was introduced together with the whole SQL/MED infrastructure by commit cae565e503 of 8.4 era, so backpatch all the way back. 05 March 2015, 21:03:16 UTC
73f236f Add comment for "is_internal" parameter This was missed in my commit f4c4335 of 9.3 vintage, so backpatch to that. 03 March 2015, 17:04:34 UTC
43d81f1 Fix pg_dump handling of extension config tables Since 9.1, we've provided extensions with a way to denote "configuration" tables- tables created by an extension which the user may modify. By marking these as "configuration" tables, the extension is asking for the data in these tables to be pg_dump'd (tables which are not marked in this way are assumed to be entirely handled during CREATE EXTENSION and are not included at all in a pg_dump). Unfortunately, pg_dump neglected to consider foreign key relationships between extension configuration tables and therefore could end up trying to reload the data in an order which would cause FK violations. This patch teaches pg_dump about these dependencies, so that the data dumped out is done so in the best order possible. Note that there's no way to handle circular dependencies, but those have yet to be seen in the wild. The release notes for this should include a caution to users that existing pg_dump-based backups may be invalid due to this issue. The data is all there, but restoring from it will require extracting the data for the configuration tables and then loading them in the correct order by hand. Discussed initially back in bug #6738, more recently brought up by Gilles Darold, who provided an initial patch which was further reworked by Michael Paquier. Further modifications and documentation updates by me. Back-patch to 9.1 where we added the concept of extension configuration tables. 02 March 2015, 19:12:33 UTC
585f16d Unlink static libraries before rebuilding them. When the library already exists in the build directory, "ar" preserves members not named on its command line. This mattered when, for example, a "configure" rerun dropped a file from $(LIBOBJS). libpgport carried the obsolete member until "make clean". Back-patch to 9.0 (all supported versions). 01 March 2015, 18:06:39 UTC
1b55878 Fix planning of star-schema-style queries. Part of the intent of the parameterized-path mechanism was to handle star-schema queries efficiently, but some overly-restrictive search limiting logic added in commit e2fa76d80ba571d4de8992de6386536867250474 prevented such cases from working as desired. Fix that and add a regression test about it. Per gripe from Marc Cousin. This is arguably a bug rather than a new feature, so back-patch to 9.2 where parameterized paths were introduced. 28 February 2015, 17:43:04 UTC
abce8dc Reconsider when to wait for WAL flushes/syncrep during commit. Up to now RecordTransactionCommit() waited for WAL to be flushed (if synchronous_commit != off) and to be synchronously replicated (if enabled), even if a transaction did not have a xid assigned. The primary reason for that is that sequences' nextval() does not assign a xid but is still worthwhile to wait for on commit. This can be problematic because sometimes read only transactions do write WAL, e.g. HOT page prune records. That could then lead to read only transactions having to wait during commit, which is not something people expect in a read only transaction. This led to such strange symptoms as backends being seemingly stuck during connection establishment when all synchronous replicas are down. Especially annoying when said stuck connection is the standby trying to reconnect to allow syncrep again... This behavior is also involved in a rather complicated <= 9.4 bug where the transaction started by catchup interrupt processing waited for syncrep using latches, but didn't get the wakeup because it was already running inside the same overloaded signal handler. The fix here doesn't properly solve that issue, merely papers over the problems. In 9.5 catchup interrupts aren't processed out of signal handlers anymore. To fix all this, make nextval() acquire a top level xid, and only wait for transaction commit if a transaction both acquired a xid and emitted WAL records. If only a xid has been assigned we don't uselessly want to wait just because of writes to temporary/unlogged tables; if only WAL has been written we don't want to wait just because of HOT prunes. The xid assignment in nextval() is unlikely to cause overhead in real-world workloads. For one, it only happens every SEQ_LOG_VALS/32 values anyway; for another, only usage of nextval() without using the result in an insert or similar is affected. Discussion: 20150223165359.GF30784@awork2.anarazel.de, 369698E947874884A77849D8FE3680C2@maumau, 5CF4ABBA67674088B3941894E22A0D25@maumau Per complaint from maumau and Thom Brown Backpatch all the way back; 9.0 doesn't have syncrep, but it seems better to have consistent behavior across all maintained branches. 26 February 2015, 11:50:07 UTC
4651e37 Free SQLSTATE and SQLERRM no earlier than other PL/pgSQL variables. "RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed memory. Back-patch to 9.0 (all supported versions). Little code ran between the premature free and the read, so non-assert builds are unlikely to witness user-visible consequences. 26 February 2015, 04:48:54 UTC
f864fe0 Fix dumping of views that are just VALUES(...) but have column aliases. The "simple" path for printing VALUES clauses doesn't work if we need to attach nondefault column aliases, because there's noplace to do that in the minimal VALUES() syntax. So modify get_simple_values_rte() to detect nondefault aliases and treat that as a non-simple case. This further exposes that the "non-simple" path never actually worked; it didn't produce valid syntax. Fix that too. Per bug #12789 from Curtis McEnroe, and analysis by Andrew Gierth. Back-patch to all supported branches. Before 9.3, this also requires back-patching the part of commit 092d7ded29f36b0539046b23b81b9f0bf2d637f1 that created get_simple_values_rte() to begin with; inserting the extra test into the old factorization of that logic would've been too messy. 25 February 2015, 17:01:12 UTC
a6ddff8 Guard against spurious signals in LockBufferForCleanup. When LockBufferForCleanup() has to wait for getting a cleanup lock on a buffer it does so by setting a flag in the buffer header and then waiting for other backends to signal it using ProcWaitForSignal(). Unfortunately LockBufferForCleanup() missed that ProcWaitForSignal() can return for other reasons than the signal it is hoping for. If such a spurious signal arrives the wait flags on the buffer header will still be set. That then triggers "ERROR: multiple backends attempting to wait for pincount 1". The fix is simple: unset the flag if it is still set when retrying. That implies an additional spinlock acquisition/release, but that's unlikely to matter given the cost of waiting for a cleanup lock. Alternatively it'd have been possible to move responsibility for maintaining the relevant flag to the waiter altogether, but that might have had negative consequences due to possible floods of signals, besides being more invasive. This looks to be a very longstanding bug. The relevant code in LockBufferForCleanup() hasn't changed materially since its introduction and ProcWaitForSignal() was documented to return for unrelated reasons since 8.2. The master-only patch series removing ImmediateInterruptOK made it much easier to hit though, as ProcSendSignal/ProcWaitForSignal now uses a latch shared with other tasks. Per discussion with Kevin Grittner, Tom Lane and me. Backpatch to all supported branches. Discussion: 11553.1423805224@sss.pgh.pa.us 23 February 2015, 15:14:15 UTC
cdf813c Fix potential deadlock with libpq non-blocking mode. If the libpq output buffer is full, the pqSendSome() function tries to drain any incoming data. This avoids a deadlock if the server, e.g., sends a lot of NOTICE messages and blocks until we read them. However, pqSendSome() only did that in blocking mode. In non-blocking mode, the deadlock could still happen. To fix, take a two-pronged approach: 1. Change the documentation to instruct that when PQflush() returns 1, you should wait for both read- and write-ready, and call PQconsumeInput() if it becomes read-ready. That fixes the deadlock, but applications are not going to change overnight. 2. In pqSendSome(), drain the input buffer before returning 1. This alleviates the problem for applications that only wait for write-ready. In particular, a slow but steady stream of NOTICE messages during COPY FROM STDIN will no longer cause a deadlock. The risk remains that the server attempts to send a large burst of data and fills its output buffer, and at the same time the client also sends enough data to fill its output buffer. The application will deadlock if it goes to sleep, waiting for the socket to become write-ready, before the server's data arrives. In practice, NOTICE messages and such that the server might be sending are usually short, so it's highly unlikely that the server would fill its output buffer so quickly. Backpatch to all supported versions. 23 February 2015, 11:32:42 UTC
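A minimal sketch of the non-blocking flush loop the revised documentation recommends (select()-based for brevity; EINTR handling omitted): when PQflush() returns 1, wait for the socket to become read- or write-ready, and consume input when it is readable.

```c
#include <sys/select.h>
#include <libpq-fe.h>

static int
flush_nonblocking(PGconn *conn)
{
    int rc;

    while ((rc = PQflush(conn)) == 1)
    {
        fd_set  rmask,
                wmask;
        int     sock = PQsocket(conn);

        FD_ZERO(&rmask);
        FD_ZERO(&wmask);
        FD_SET(sock, &rmask);
        FD_SET(sock, &wmask);

        if (select(sock + 1, &rmask, &wmask, NULL, NULL) < 0)
            return -1;

        /* Drain server data first so neither side stays blocked. */
        if (FD_ISSET(sock, &rmask) && !PQconsumeInput(conn))
            return -1;
    }
    return rc;                  /* 0 on success, -1 on failure */
}
```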
f389b6e Fix misparsing of empty value in conninfo_uri_parse_params(). After finding an "=" character, the pointer was advanced twice when it should only advance once. This is harmless as long as the value after "=" has at least one character; but if it doesn't, we'd miss the terminator character and include too much in the value. In principle this could lead to reading off the end of memory. It does not seem worth treating as a security issue though, because it would happen on client side, and besides client logic that's taking conninfo strings from untrusted sources has much worse security problems than this. Report and patch received off-list from Thomas Fanghaenel. Back-patch to 9.2 where the faulty code was introduced. 21 February 2015, 17:59:39 UTC
a196e67 Fix object identities for pg_conversion objects We were neglecting to schema-qualify them. Backpatch to 9.3, where object identities were introduced as a concept by commit f8348ea32ec8. 18 February 2015, 17:28:12 UTC
a7ad5cf Fix failure to honor -Z compression level option in pg_dump -Fd. cfopen() and cfopen_write() failed to pass the compression level through to zlib, so that you always got the default compression level if you got any at all. In passing, also fix these and related functions so that the correct errno is reliably returned on failure; the original coding supposes that free() cannot change errno, which is untrue on at least some platforms. Per bug #12779 from Christoph Berg. Back-patch to 9.1 where the faulty code was introduced. Michael Paquier 18 February 2015, 16:43:00 UTC
4ea2d2d Remove code to match IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses. In investigating yesterday's crash report from Hugo Osvaldo Barrera, I only looked back as far as commit f3aec2c7f51904e7 where the breakage occurred (which is why I thought the IPv4-in-IPv6 business was undocumented). But actually the logic dates back to commit 3c9bb8886df7d56a and was simply broken by erroneous refactoring in the later commit. A bit of archives excavation shows that we added the whole business in response to a report that some 2003-era Linux kernels would report IPv4 connections as having IPv4-in-IPv6 addresses. The fact that we've had no complaints since 9.0 seems to be sufficient confirmation that no modern kernels do that, so let's just rip it all out rather than trying to fix it. Do this in the back branches too, thus essentially deciding that our effective behavior since 9.0 is correct. If there are any platforms on which the kernel reports IPv4-in-IPv6 addresses as such, yesterday's fix would have made for a subtle and potentially security-sensitive change in the effective meaning of IPv4 pg_hba.conf entries, which does not seem like a good thing to do in minor releases. So let's let the post-9.0 behavior stand, and change the documentation to match it. In passing, I failed to resist the temptation to wordsmith the description of pg_hba.conf IPv4 and IPv6 address entries a bit. A lot of this text hasn't been touched since we were IPv4-only. 17 February 2015, 17:49:18 UTC
9a90ec9 Improve pg_check_dir code and comments. Avoid losing errno if readdir() fails and closedir() works. Consistently return 4 rather than 3 if both a lost+found directory and other files are found, rather than returning one value or the other depending on the order of the directory listing. Update comments to match the actual behavior. These oversights date to commits 6f03927fce038096f53ca67eeab9adb24938f8a6 and 17f15239325a88581bb4f9cf91d38005f1f52d69. Marco Nenciarini 17 February 2015, 15:54:29 UTC
7bc6e59 Fix misuse of memcpy() in check_ip(). The previous coding copied garbage into a local variable, pretty much ensuring that the intended test of an IPv6 connection address against a promoted IPv4 address from pg_hba.conf would never match. The lack of field complaints likely indicates that nobody realized this was supposed to work, which is unsurprising considering that no user-facing docs suggest it should work. In principle this could have led to a SIGSEGV due to reading off the end of memory, but since the source address would have pointed to somewhere in the function's stack frame, that's quite unlikely. What led to discovery of the bug is Hugo Osvaldo Barrera's report of a crash after an OS upgrade, which is probably because he is now running a system in which memcpy raises abort() upon detecting overlapping source and destination areas. (You'd have to additionally suppose some things about the stack frame layout to arrive at this conclusion, but it seems plausible.) This has been broken since the code was added, in commit f3aec2c7f51904e7, so back-patch to all supported branches. 16 February 2015, 21:18:04 UTC
4662ba5 Fix null-pointer-deref crash while doing COPY IN with check constraints. In commit bf7ca15875988a88e97302e012d7c4808bef3ea9 I introduced an assumption that an RTE referenced by a whole-row Var must have a valid eref field. This is false for RTEs constructed by DoCopy, and there are other places taking similar shortcuts. Perhaps we should make all those places go through addRangeTableEntryForRelation or its siblings instead of having ad-hoc logic, but the most reliable fix seems to be to make the new code in ExecEvalWholeRowVar cope if there's no eref. We can reasonably assume that there's no need to insert column aliases if no aliases were provided. Add a regression test case covering this, and also verifying that a sane column name is in fact available in this situation. Although the known case only crashes in 9.4 and HEAD, it seems prudent to back-patch the code change to 9.2, since all the ingredients for a similar failure exist in the variant patch applied to 9.3 and 9.2. Per report from Jean-Pierre Pelletier. 16 February 2015, 04:26:46 UTC
d5f70a2 pg_regress: Write processed input/*.source into output dir Before, it was writing the processed files into the input directory, which is incorrect in a vpath build. 15 February 2015, 06:20:57 UTC
6ef5d89 Fix broken #ifdef for __sparcv8 Rob Rowan. Backpatch to all supported versions, like the patch that added the broken #ifdef. 13 February 2015, 21:57:05 UTC
9ecd51d pg_upgrade: quote directory names in delete_old_cluster script This allows the delete script to properly function when special characters appear in directory paths, e.g. spaces. Backpatch through 9.0 12 February 2015, 03:06:04 UTC
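A hedged sketch of the general shape of such a fix (hypothetical helper, not the pg_upgrade source): when C code emits a shell script, each path must be wrapped in quotes so that spaces and other special characters survive the shell's word splitting.

```c
/* Sketch: emitting a quoted delete command into a generated script. */
#include <stdio.h>

static void
emit_delete_command(FILE *script, const char *path)
{
    /* Naive double-quoting; real code must also escape embedded quotes. */
    fprintf(script, "rm -rf \"%s\"\n", path);
}

int
main(void)
{
    /* writes: rm -rf "/srv/old cluster/data" */
    emit_delete_command(stdout, "/srv/old cluster/data");
    return 0;
}
```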
e20523f pg_upgrade: preserve freeze info for postgres/template1 dbs pg_database.datfrozenxid and pg_database.datminmxid were not preserved for the 'postgres' and 'template1' databases. This could cause missing clog file errors on access to user tables and indexes after upgrades in these databases. Backpatch through 9.0 12 February 2015, 02:02:28 UTC
734bbf2 Fix missing PQclear() in libpqrcv_endstreaming(). This omission leaked one PGresult per WAL streaming cycle, which possibly would never be enough to notice in the real world, but it's still a leak. Per Coverity. Back-patch to 9.3 where the error was introduced. 12 February 2015, 00:20:49 UTC
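The libpq rule the fix enforces, as a standalone fragment (illustrative, not the walreceiver code): every PGresult handed back by PQexec() or PQgetResult() must be released with PQclear(), on error paths as well as success paths.

```c
/* Sketch of the PGresult-ownership rule; PQclear(NULL) is a no-op. */
#include <stdio.h>
#include <libpq-fe.h>

static void
run_once(PGconn *conn)
{
    PGresult   *res = PQexec(conn, "SELECT 1");

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));

    PQclear(res);               /* without this, one PGresult leaks per call */
}
```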
bcf2dec Fix minor memory leak in ident_inet(). We'd leak the ident_serv data structure if the second pg_getaddrinfo_all (the one for the local address) failed. This is not of great consequence because a failure return here just leads directly to backend exit(), but if this function is going to try to clean up after itself at all, it should not have such holes in the logic. Try to fix it in a future-proof way by having all the failure exits go through the same cleanup path, rather than "optimizing" some of them. Per Coverity. Back-patch to 9.2, which is as far back as this patch applies cleanly. 12 February 2015, 00:09:54 UTC
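The "single cleanup path" idea, sketched with the standard getaddrinfo API rather than PostgreSQL's pg_getaddrinfo_all wrappers (names and port are illustrative; 113 is the ident service):

```c
/* Sketch: funnel all failure exits through one cleanup label. */
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

static int
lookup_both(const char *remote, const char *local)
{
    struct addrinfo *remote_ai = NULL;
    struct addrinfo *local_ai = NULL;
    int         ret = -1;

    if (getaddrinfo(remote, "113", NULL, &remote_ai) != 0)
        goto cleanup;
    if (getaddrinfo(local, NULL, NULL, &local_ai) != 0)
        goto cleanup;           /* a hole like the one fixed: this path
                                 * previously leaked remote_ai */

    ret = 0;                    /* ... use both addresses here ... */

cleanup:
    if (remote_ai)
        freeaddrinfo(remote_ai);
    if (local_ai)
        freeaddrinfo(local_ai);
    return ret;
}
```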
5ea8cfe Fix more memory leaks in failure path in buildACLCommands. We already had one go at this issue in commit d73b7f973db5ec7e, but we failed to notice that buildACLCommands also leaked several PQExpBuffers along with a simply malloc'd string. This time let's try to make the fix a bit more future-proof by eliminating the separate exit path. It's still not exactly critical because pg_dump will curl up and die on failure; but since the amount of the potential leak is now several KB, it seems worth back-patching as far as 9.2 where the previous fix landed. Per Coverity, which evidently is smarter than clang's static analyzer. 11 February 2015, 23:35:23 UTC
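A sketch of the fix's shape (illustrative, not the buildACLCommands code): create the PQExpBuffers up front, funnel success and failure through one exit, and destroy everything there, so no early return can leak a buffer.

```c
/* Sketch of the single-exit PQExpBuffer pattern. */
#include <stdbool.h>
#include "pqexpbuffer.h"        /* libpq's extensible string buffers */

static bool
build_commands(const char *acl)
{
    PQExpBuffer grantee = createPQExpBuffer();
    PQExpBuffer privs = createPQExpBuffer();
    bool        ok = false;

    if (acl == NULL || *acl == '\0')
        goto done;              /* failure still flows through cleanup */

    appendPQExpBufferStr(privs, acl);
    /* ... grantee would be filled in while parsing the ACL ... */
    ok = true;

done:
    destroyPQExpBuffer(grantee);
    destroyPQExpBuffer(privs);
    return ok;
}
```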
1a321fe Fix array handling in ecpg. When ecpg was rewritten to use the new protocol version, not all variable types were handled correctly. This patch rewrites the code for those types to fix that. It also updates the documentation to correctly describe the status of array handling. 11 February 2015, 10:13:11 UTC
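For readers unfamiliar with ecpg's array support, a hedged sketch (table and column names are invented): host-variable arrays let a single SELECT ... INTO fetch several rows at once without a cursor, one of the usage patterns this kind of fix affects.

```c
/* Sketch of ecpg host-variable arrays; preprocess with ecpg. */
#include <stdio.h>

EXEC SQL BEGIN DECLARE SECTION;
int         ids[100];
double      vals[100];
EXEC SQL END DECLARE SECTION;

int
main(void)
{
    EXEC SQL CONNECT TO testdb;

    /* fetch up to 100 rows into the host arrays in one statement */
    EXEC SQL SELECT id, val INTO :ids, :vals FROM t LIMIT 100;

    printf("first id: %d\n", ids[0]);

    EXEC SQL DISCONNECT ALL;
    return 0;
}
```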
a4e871c Fix pg_dump's heuristic for deciding which casts to dump. Back in 2003 we had a discussion about how to decide which casts to dump. At the time pg_dump really only considered an object's containing schema to decide what to dump (ie, dump whatever's not in pg_catalog), and so we chose a complicated idea involving whether the underlying types were to be dumped (cf commit a6790ce85752b67ad994f55fdf1a450262ccc32e). But users are allowed to create casts between built-in types, and we failed to dump such casts. Let's get rid of that heuristic, which has accreted even more ugliness since then, in favor of just looking at the cast's OID to decide if it's a built-in cast or not. In passing, also fix some really ancient code that supposed that it had to manufacture a dependency for the cast on its cast function; that's only true when dumping from a pre-7.3 server. This just resulted in some wasted cycles and duplicate dependency-list entries with newer servers, but we might as well improve it. Per gripes from a number of people, most recently Greg Sabino Mullane. Back-patch to all supported branches. 11 February 2015, 03:38:20 UTC
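The OID-based test rests on the fact that all objects created at initdb time get OIDs below FirstNormalObjectId (16384, from access/transam.h). A hedged, self-contained sketch of such a check; pg_dump's actual coding may differ in detail:

```c
/* Sketch: deciding built-in vs. user-created by OID boundary. */
#include <stdbool.h>

typedef unsigned int Oid;       /* matches PostgreSQL's definition */

/* first OID available for user-created objects (access/transam.h) */
#define FirstNormalObjectId 16384

static bool
cast_is_builtin(Oid castoid)
{
    /* initdb-created objects all have OIDs below this boundary */
    return castoid < (Oid) FirstNormalObjectId;
}
```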
672abc4 Fix GEQO to not assume its join order heuristic always works. Back in commit 400e2c934457bef4bc3cc9a3e49b6289bd761bc0 I rewrote GEQO's gimme_tree function to improve its heuristic for modifying the given tour into a legal join order. In what can only be called a fit of hubris, I supposed that this new heuristic would *always* find a legal join order, and ripped out the old logic that allowed gimme_tree to sometimes fail. The folly of this is exposed by bug #12760, in which the "greedy" clumping behavior of merge_clump() can lead it into a dead end which could only be recovered from by un-clumping. We have no code for that and wouldn't know exactly what to do with it if we did. Rather than try to improve the heuristic rules still further, let's just recognize that it *is* a heuristic and probably must always have failure cases. So, put back the code removed in the previous commit to allow for failure (but comment it a bit better this time). It's possible that this code was actually fully correct at the time and has only been broken by the introduction of LATERAL. But having seen this example I no longer have much faith in that proposition, so back-patch to all supported branches. 11 February 2015, 01:37:24 UTC
5f0ba4a Report WAL flush, not insert, position in replication IDENTIFY_SYSTEM When beginning streaming replication, the client usually issues the IDENTIFY_SYSTEM command, which used to return the current WAL insert position. That's not suitable for the intended purpose of that field, however. pg_receivexlog uses it to start replication from the reported point, but if it hasn't been flushed to disk yet, it will fail. Change IDENTIFY_SYSTEM to report the flush position instead. Backpatch to 9.1 and above. 9.0 doesn't report any WAL position. 06 February 2015, 09:32:16 UTC
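A sketch of the client side being described (connection string and error handling are illustrative): a libpq replication connection issues IDENTIFY_SYSTEM and reads the WAL position from the third result column, the field this commit changes from insert to flush position.

```c
/* Sketch: issuing IDENTIFY_SYSTEM over a replication connection. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("replication=true");
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    res = PQexec(conn, "IDENTIFY_SYSTEM");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        /* columns: systemid, timeline, xlogpos */
        printf("wal position: %s\n", PQgetvalue(res, 0, 2));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```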
f0241d6 Add missing float.h include to snprintf.c. On Windows, _isnan() (which isnan() is redirected to in port/win32.h) is declared in float.h, not math.h. Per buildfarm animal currawong. Backpatch to all supported branches. 04 February 2015, 12:31:40 UTC
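The portability point in miniature (a generic sketch, not the snprintf.c code): on Windows, isnan() is commonly mapped to _isnan(), which lives in <float.h> rather than <math.h>.

```c
/* Sketch of the Windows isnan() portability shim. */
#include <math.h>
#ifdef _WIN32
#include <float.h>              /* declares _isnan() */
#ifndef isnan
#define isnan(x) _isnan(x)
#endif
#endif
#include <stdio.h>

int
main(void)
{
    double      zero = 0.0;

    /* 0.0 / 0.0 yields NaN under IEEE 754 */
    printf("isnan(0.0/0.0) = %d\n", isnan(zero / zero) ? 1 : 0);
    return 0;
}
```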
b5ea07b Stamp 9.3.6. 02 February 2015, 20:43:50 UTC
0a819b6 Last-minute updates for release notes. Add entries for security issues. Security: CVE-2015-0241 through CVE-2015-0244 02 February 2015, 16:24:05 UTC
cd19848 Be more careful to not lose sync in the FE/BE protocol. If any error occurred while we were in the middle of reading a protocol message from the client, we could lose sync, and incorrectly try to interpret a part of another message as a new protocol message. That will usually lead to an "invalid frontend message" error that terminates the connection. However, this is a security issue because an attacker might be able to deliberately cause an error, inject a Query message in what's supposed to be just user data, and have the server execute it. We were quite careful to not have CHECK_FOR_INTERRUPTS() calls or other operations that could ereport(ERROR) in the middle of processing a message, but a query cancel interrupt or statement timeout could nevertheless cause it to happen. Also, the V2 fastpath and COPY handling were not so careful. It's very difficult to recover in the V2 COPY protocol, so we will just terminate the connection on error. In practice, that's what happened previously anyway, as we lost protocol sync. To fix, add a new variable in pqcomm.c, PqCommReadingMsg, that is set whenever we're in the middle of reading a message. When it's set, we cannot safely ERROR out and continue running, because we might've read only part of a message. PqCommReadingMsg acts somewhat similarly to critical sections in that if an error occurs while it's set, the error handler will force the connection to be terminated, as if the error was FATAL. It's not implemented by promoting ERROR to FATAL in elog.c, like ERROR is promoted to PANIC in critical sections, because we want to be able to use PG_TRY/CATCH to recover and regain protocol sync. pq_getmessage() takes advantage of that to prevent an OOM error from terminating the connection. To prevent unnecessary connection terminations, add a holdoff mechanism similar to HOLD/RESUME_INTERRUPTS() that can be used to hold off query cancel interrupts, but still allow die interrupts. The rules on which interrupts are processed when are now a bit more complicated, so refactor ProcessInterrupts() and the calls to it in signal handlers so that the signal handlers always call it if ImmediateInterruptOK is set, and ProcessInterrupts() can decide to not do anything if the other conditions are not met. Reported by Emil Lenngren. Patch reviewed by Noah Misch and Andres Freund. Backpatch to all supported versions. Security: CVE-2015-0244 02 February 2015, 15:09:40 UTC
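The guard-flag idea distilled into a generic runnable sketch (names are hypothetical, not the pqcomm.c implementation): while a message is partially read, the byte stream cannot be trusted, so any error mid-message must tear down the connection instead of trying to resynchronize.

```c
/* Sketch: an error mid-message is fatal; at a boundary it is not. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static bool reading_msg = false;

static void
on_error(const char *msg)
{
    if (reading_msg)
    {
        /* mid-message: the stream may be desynchronized; give up */
        fprintf(stderr, "FATAL: %s\n", msg);
        exit(1);
    }
    /* at a message boundary: safe to report and keep the session */
    fprintf(stderr, "ERROR: %s\n", msg);
}

int
main(void)
{
    on_error("cancel request");     /* boundary: session survives */

    reading_msg = true;             /* begin reading a protocol message */
    on_error("cancel request");     /* mid-message: connection terminates */
    return 0;
}
```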