Use a retry loop very like the code-signing retry loop.
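The loop itself isn't shown here, so below is a minimal sketch of the pattern being described, assuming nothing about the real build scripts: retry a flaky step a bounded number of times, pausing between attempts.

```cpp
// Minimal sketch of the retry pattern referred to above; not the viewer's
// actual build code. An arbitrary flaky step is retried a fixed number of
// times with a pause between attempts.
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

bool with_retries(const std::function<bool()>& attempt,
                  int max_tries = 5,
                  std::chrono::seconds delay = std::chrono::seconds(10))
{
    for (int i = 1; i <= max_tries; ++i)
    {
        if (attempt())
            return true;                       // success: stop retrying
        std::cerr << "attempt " << i << " of " << max_tries << " failed\n";
        if (i < max_tries)
            std::this_thread::sleep_for(delay);
    }
    return false;                              // every attempt failed
}

int main()
{
    // Placeholder operation standing in for whatever step needs retrying.
    int calls = 0;
    bool ok = with_retries([&]{ return ++calls == 3; },
                           5, std::chrono::seconds(0));
    std::cout << (ok ? "succeeded" : "gave up") << " after " << calls << " tries\n";
    return ok ? 0 : 1;
}
```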
A test executable on a GitHub Windows runner failed with 0xC00000FD, the NTSTATUS code for stack overflow.
(cherry picked from commit aab7b4ba3812e5876b1205285bcfd8cff96bcac9)
Upload a new Windows-exe artifact containing just the executable (needed by BugSplat) separately from the artifact containing the whole NSIS installer. This requires a new viewer_exe step output set by viewer_manifest.py (a sketch of the step-output mechanism follows this entry).

Define viewer_channel and viewer_version as build job outputs. Set viewer_channel in build.yaml when the tag is interpreted. Set viewer_version in build.sh at the point where it would previously have posted viewer_version.txt to codeticket.

Add a post-windows-symbols job, dependent on the build job, that engages secondlife/viewer-post-bugsplat-windows, which in turn engages secondlife/post-bugsplat-windows. We keep the actual upload code in a separate repo in case we need to modify it before rerunning to resolve upload errors: if the upload code lived in the viewer repo itself, rerunning the upload with modifications would require rerunning the whole viewer build, defeating the purpose of SL-19243. Because of that new upload job in build.yaml, skip Windows symbol uploads in build.sh.

Use a simple artifact name (just the platform name) for metadata, relying on flatten_files.py's filename collision resolution. Use hyphens, not spaces, in the remaining artifact names: download-artifact apparently has trouble with artifact names containing spaces. Only run the release job when there actually is a tag; without that check we get errors. We need not create flatten_files.py's output directory beforehand, because it creates that directory implicitly.
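For context, a GitHub Actions step publishes an output such as viewer_exe by appending a name=value line to the file named by the GITHUB_OUTPUT environment variable. The real code lives in viewer_manifest.py; the C++ sketch below only illustrates that mechanism, and the executable path in it is a placeholder.

```cpp
// Illustration only: how a step output such as "viewer_exe" reaches build.yaml.
// GitHub Actions reads name=value lines appended to the file it names in
// $GITHUB_OUTPUT. (The viewer actually does this from viewer_manifest.py.)
#include <cstdlib>
#include <fstream>
#include <iostream>

int main()
{
    const char* outpath = std::getenv("GITHUB_OUTPUT");
    if (!outpath)
    {
        std::cerr << "not running under GitHub Actions\n";
        return 1;
    }
    std::ofstream out(outpath, std::ios::app);   // always append, never truncate
    // Placeholder path: viewer_manifest.py knows the real installer layout.
    out << "viewer_exe=build/newview/Release/SecondLifeViewer.exe\n";
    return 0;
}
```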
Add DEBUG log output to try to diagnose.
to try to avoid "Resource busy" errors from hdiutil.
LL_USE_SYSTEM_RAND has been disabled since June 2008; that code only clutters
the implementation we actually use.
following promotion of DRTVWR-580
Normalize the case of the temp directory name for string comparison.
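A minimal sketch of that comparison (not the viewer's actual test code), assuming the usual Windows situation where the same directory can be reported with differing case:

```cpp
// Fold case before comparing Windows pathnames, since the same temp directory
// may come back with differing case from different APIs.
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>

std::string lowercase(std::string s)
{
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c){ return static_cast<char>(std::tolower(c)); });
    return s;
}

bool same_path_nocase(const std::string& a, const std::string& b)
{
    return lowercase(a) == lowercase(b);
}

int main()
{
    std::cout << std::boolalpha
              << same_path_nocase("C:\\Users\\Runner\\AppData\\Local\\Temp",
                                  "C:\\USERS\\RUNNER\\AppData\\Local\\temp")
              << '\n';   // true
}
```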
Turns out that the pathname of the Python executable wasn't the issue.
This reverts commit 7dc6211ad5ea83685a35c6fff740278343aa8b9d.
On GitHub Windows runners, trying to make build.yaml set PYTHON=python in the
environment doesn't work: integration tests still fail with "Access is denied"
because they're still trying to execute the interpreter's full pathname.
Instead, make llprocess_test and llleap_test detect that they're running on
GitHub Windows and override the environment variable PYTHON with a baked-in
string constant "python".
Set APR_LOG to a single pathname for the whole test run, instead of a new value for each LLProcess::create() invocation.
Since the internal apr_log() function only looks at APR_LOG once per process,
the first test (which succeeded, hence no log file dump) left the log file
open with that same original pathname. Resetting the APR_LOG environment
variable for subsequent runs only made the new code in llprocess_test look for
files that were never created.
Remove llcommon circular dependency on llfilesystem, which doesn't work for
this case anyway.
Introducing indirection via test_python_script.py did NOT address the "Access
is denied" errors on GitHub Windows runners.
improvements can lead to perceived inventory loss due to cache corruption"
This reverts commit cf692c40b0b9f8d0d04cd10a02a84e3f697a2e99.
The claim is that the Windows Python interpreter is integrated with the OS in
such a way that a command line trying to run Python on a script that "looks
suspicious" (e.g. one in a system temp directory) fails with "Access is
denied" without even loading the interpreter. That theory would at least
explain the "Access is denied" errors we've been getting when trying to run
Python scripts generated into the system temp directory by our integration
tests. Our hope is that generating such scripts into the GitHub RUNNER_TEMP
directory will work better (see the sketch below).
As this test is specific to Windows, don't even bother running Mac builds.
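A small sketch of the directory choice being described, assuming only the documented RUNNER_TEMP environment variable; the viewer's NamedTempFile helper differs in detail:

```cpp
// Prefer the GitHub runner's RUNNER_TEMP for generated helper scripts,
// falling back to the normal system temp directory everywhere else.
#include <cstdlib>
#include <filesystem>
#include <iostream>

std::filesystem::path script_temp_dir()
{
    if (const char* runner_temp = std::getenv("RUNNER_TEMP"))
        return runner_temp;                           // GitHub Actions runner
    return std::filesystem::temp_directory_path();    // everywhere else
}

int main()
{
    std::cout << "generating test scripts under: " << script_temp_dir() << '\n';
}
```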
branch
Using concatenation appends the intended filename to the parent directory
name, instead of putting the filename in the parent directory.
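The bug class is easiest to see with std::filesystem, whether or not that is the code involved here; the filename below is a placeholder:

```cpp
// operator+= concatenates text onto the last path component, while
// operator/ adds a new component inside the parent directory.
#include <filesystem>
#include <iostream>

int main()
{
    namespace fs = std::filesystem;
    fs::path parent = "/tmp/build";

    fs::path concatenated = parent;
    concatenated += "settings.xml";             // "/tmp/buildsettings.xml"   (wrong)

    fs::path joined = parent / "settings.xml";  // "/tmp/build/settings.xml"  (right)

    std::cout << concatenated << '\n' << joined << '\n';
}
```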
It's cool to be able to write 'arg1 << "stuff" << var ...;' in place of a lambda
accepting a std::ostream reference, but the cascading compile errors mean it's no
longer worth trying to make that work, given actual C++ lambdas (see the sketch below).
Also clean up a lingering BOOST_FOREACH() and a boost::bind() while at it.
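A hedged sketch of the replacement being described: instead of a Boost.Phoenix stream expression, pass an ordinary lambda that takes the std::ostream reference explicitly. The render() helper and names below are invented for illustration:

```cpp
// A consumer hands the callback a stream to write into; the caller supplies
// a plain lambda rather than a Phoenix expression like (arg1 << "stuff" << var).
#include <functional>
#include <iostream>
#include <sstream>
#include <string>

std::string render(const std::function<void(std::ostream&)>& emit)
{
    std::ostringstream out;
    emit(out);
    return out.str();
}

int main()
{
    int var = 42;
    // Old style (roughly): render(boost::phoenix::placeholders::arg1 << "stuff " << var);
    // New style: an ordinary lambda accepting the stream reference.
    std::cout << render([&](std::ostream& out){ out << "stuff " << var; }) << '\n';
}
```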
The recommended template uses hyphens; change them to underscores so the
generated temp file names are valid Python module names.
It seems the problem addressed by aab769e wasn't some synergy between
Boost.Phoenix and Boost.Function, but rather the lack of a Phoenix header file
introducing operator<<().
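For anyone reproducing this, the following toy example (requires Boost) shows the kind of Phoenix stream expression involved. It pulls in the aggregate <boost/phoenix.hpp> header for safety; exactly which narrower operator header suffices is recalled from Phoenix's layout rather than taken from the viewer's code:

```cpp
// Requires Boost. A Phoenix stream expression like (arg1 << "text") only
// compiles once Phoenix's operator overloads are in scope; the aggregate
// header below certainly brings them in.
#include <boost/phoenix.hpp>
#include <iostream>

int main()
{
    using boost::phoenix::placeholders::arg1;
    // A lazy callable: invoking it with an ostream streams into that ostream.
    auto emit = (arg1 << "hello from a Phoenix stream expression\n");
    emit(std::cout);
}
```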
from std::function, since some consumers still use (e.g.)
boost::phoenix::placeholders::arg1 to generate an inline callable.
On GitHub Windows Actions runners, we're getting permissions errors trying to
tell the Python interpreter to run a NamedTempFile script. Try using
NamedExtTempFile to give each such script a .py extension.
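A generic sketch of the idea, not the viewer's NamedExtTempFile class; the script name and contents are placeholders:

```cpp
// Write a generated helper script with an explicit .py extension so Windows
// recognizes it as a Python source file.
#include <filesystem>
#include <fstream>
#include <iostream>

int main()
{
    namespace fs = std::filesystem;
    fs::path script = fs::temp_directory_path() / "llleap_helper.py";  // hypothetical name
    std::ofstream(script) << "print('hello from a generated script')\n";
    std::cout << "wrote " << script << '\n';
}
```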
do not need 'using' directive, given BOOST_BIND_GLOBAL_PLACEHOLDERS.
On a low-powered GitHub Mac runner, the system doesn't wake up as soon as it
should, and we get spurious "too late" errors. Try a bigger time increment.
It seems we're no longer implicitly picking up <array> from some other header
file or files. Where we use std::array, include <array> explicitly.
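The fix is mechanical; a minimal example of the pattern:

```cpp
// Include <array> directly wherever std::array appears, instead of relying
// on another header to pull it in transitively.
#include <array>
#include <iostream>

int main()
{
    std::array<int, 3> rgb{ 255, 128, 0 };
    std::cout << rgb[0] << ',' << rgb[1] << ',' << rgb[2] << '\n';
}
```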
With VS 2022 on Windows GitHub Actions runners, we can't build apr_suite at
all with the upstream .sln / .vcxproj files, so we had to switch to the
"experimental" CMake support. However, there's no CMakeLists.txt file for
apr-iconv, so the Windows package omits that library.