in the path passed as the macOS viewer_exe GitHub output.

On Mac, in the CMake USE_BUGSPLAT logic, we created both xcarchive.zip (which
is what BugSplat wants to see) and secondlife-symbols-darwin-64.tar.bz2 (which
we don't think is used for anything). The tarball was posted to codeticket --
but why? If the point is to allow manually re-uploading to BugSplat in case of
failure, we'd do better to save xcarchive.zip to codeticket.
For SL-19243, posting xcarchive.zip directly supports the goal of breaking out
the upload to BugSplat as a separate step.
In any case, since xcarchive.zip is a superset of the tarball, the tarball can
be recreated from the zip file, whereas the zip file can't be recreated from
the tarball without opening the .dmg installer and extracting the viewer
executable.
If the xcarchive.zip file exists (that is, on Mac), post that to codeticket or
GitHub, as applicable, instead of the tarball. In fact, in the USE_BUGSPLAT
case, don't even bother creating the tarball, since we're going to ignore it.
Make the new build.sh logic that insists on BUGSPLAT_USER and BUGSPLAT_PASS
conditional on BUGSPLAT_DB.

Use a retry loop very like the code-signing retry loop.
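
The loop itself lives in the build tooling; purely to illustrate the pattern,
here is a minimal C++ sketch (names and limits are illustrative, not the
project's actual code):

```cpp
// Minimal sketch of the retry pattern: attempt a flaky operation a few
// times, pausing between attempts. Names and limits are illustrative.
#include <chrono>
#include <functional>
#include <thread>

bool retry(int attempts, std::chrono::seconds pause,
           const std::function<bool()>& operation)
{
    for (int i = 0; i < attempts; ++i)
    {
        if (operation())          // success: stop retrying
            return true;
        if (i + 1 < attempts)     // failure: wait, then try again
            std::this_thread::sleep_for(pause);
    }
    return false;                 // every attempt failed
}
```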

A test executable on a GitHub Windows runner failed with C00000FD, which
indicates stack overflow.
(cherry picked from commit aab7b4ba3812e5876b1205285bcfd8cff96bcac9)

Upload a new Windows-exe artifact containing just the executable (needed by
BugSplat) separately from the artifact containing the whole NSIS installer.
This requires a new viewer_exe step output set by viewer_manifest.py.
Define viewer_channel and viewer_version as build job outputs.
Set viewer_channel in build.yaml when the tag is interpreted.
Set viewer_version in build.sh at the point where it would have posted
viewer_version.txt to codeticket.
Add a post-windows-symbols job, dependent on the build job, that engages
secondlife/viewer-post-bugsplat-windows, which in turn engages
secondlife/post-bugsplat-windows. We keep the actual upload code in a separate
repo in case we need to modify that code before rerunning to resolve upload
errors. If we kept the upload code in the viewer repo itself, rerunning the
upload with modifications would necessarily require rerunning the viewer
build, which would defeat the purpose of SL-19243.
Because of that new upload job in build.yaml, skip Windows symbol uploads
in build.sh.
Use a simple (platform name) artifact name for metadata because of
flatten_files.py's filename collision resolution.
Use hyphens, not spaces, in the remaining artifact names: apparently
download-artifact doesn't much like artifacts with spaces in their names.
Only run the release job when there is in fact a tag; without that, we get
errors. We need not create flatten_files.py's output directory beforehand
because it will do that implicitly.

Add DEBUG log output to try to diagnose.

to try to avoid "Resource busy" errors from hdiutil.

LL_USE_SYSTEM_RAND has been disabled since June 2008; that code only clutters
the implementation we actually use.

following promotion of DRTVWR-580

Normalize the case of the temp directory name for string comparison.
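
Windows pathnames compare case-insensitively, so the textual comparison has to
fold case first; a minimal illustrative sketch (not the viewer's actual
helpers):

```cpp
// Illustrative sketch: fold both pathnames to lower case before
// comparing, since Windows filesystem names are case-insensitive.
#include <algorithm>
#include <cctype>
#include <string>

std::string to_lower(std::string s)
{
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    return s;
}

bool same_path_ci(const std::string& a, const std::string& b)
{
    return to_lower(a) == to_lower(b);
}
```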

It turns out that the pathname of the Python executable wasn't the issue.
This reverts commit 7dc6211ad5ea83685a35c6fff740278343aa8b9d.

On GitHub Windows runners, trying to make build.yaml set PYTHON=python in the
environment doesn't work: integration tests still fail with "Access is denied"
because they're still trying to execute the interpreter's full pathname.
Instead, make llprocess_test and llleap_test detect the GitHub Windows case
and override the environment variable PYTHON with the baked-in string constant
"python".
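
A hedged sketch of that detection; the GITHUB_ACTIONS check and helper name
are illustrative assumptions, not the tests' actual code:

```cpp
// Illustrative sketch (not the tests' actual code): pick the command
// the test should use to launch Python.
#include <cstdlib>
#include <string>

std::string python_command()
{
#ifdef _WIN32
    // GitHub Actions sets GITHUB_ACTIONS on its runners. There, a bare
    // "python" works where the interpreter's full pathname fails with
    // "Access is denied".
    if (std::getenv("GITHUB_ACTIONS"))
        return "python";
#endif
    // Elsewhere, honor the PYTHON environment variable when present.
    if (const char* python = std::getenv("PYTHON"))
        return python;
    return "python3";
}
```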

instead of a new value for each LLProcess::create() invocation.
Since the internal apr_log() function only looks at APR_LOG once per process,
the first test (which succeeded, hence no log file dump) left the log file
open under that same original pathname. Resetting the APR_LOG environment
variable for subsequent runs only made the new code in llprocess_test look for
files that were never created.
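
The upshot: the test process must settle on one APR_LOG value before launching
its first child. A sketch of that shape (the helper is illustrative, not
llprocess_test's actual code):

```cpp
// Illustrative sketch: APR reads APR_LOG only once per process, so set
// it exactly once, before launching the first child process.
#include <cstdlib>
#include <string>

void set_apr_log_once(const std::string& logpath)
{
    static bool done = false;
    if (!done)
    {
#ifdef _WIN32
        _putenv_s("APR_LOG", logpath.c_str());
#else
        setenv("APR_LOG", logpath.c_str(), /*overwrite=*/1);
#endif
        done = true;
    }
}
```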

Remove llcommon's circular dependency on llfilesystem, which doesn't work for
this case anyway.

Introducing indirection via test_python_script.py did NOT address the "Access
is denied" errors on GitHub Windows runners.

improvements can lead to perceived inventory loss due to cache corruption"
This reverts commit cf692c40b0b9f8d0d04cd10a02a84e3f697a2e99.

The claim is that the Windows Python interpreter is integrated with the OS in
such a way that a command line trying to run Python with a script that "looks
suspicious" (i.e. one in a system temp directory) fails with "Access denied"
without even loading the interpreter. At least that theory would explain the
"Access denied" errors we've been getting trying to run Python scripts
generated into the system temp directory by our integration tests.
Our hope is that generating such scripts into the GitHub RUNNER_TEMP directory
will work better.
As this test is specific to Windows, don't even bother running Mac builds.
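
A sketch of the directory choice described above (an illustrative helper; the
tests' real temp-file machinery differs):

```cpp
// Illustrative sketch: prefer GitHub's RUNNER_TEMP over the system temp
// directory when deciding where to generate helper scripts.
#include <cstdlib>
#include <string>

std::string script_temp_dir()
{
    if (const char* runner_temp = std::getenv("RUNNER_TEMP"))
        return runner_temp;          // GitHub Actions runner
    if (const char* temp = std::getenv("TEMP"))
        return temp;                 // generic Windows fallback
    return "/tmp";                   // POSIX fallback
}
```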

branch

Using concatenation appends the intended filename to the parent directory
name, instead of putting the filename in the parent directory.
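
The contrast is easy to reproduce; a small std::filesystem illustration with
made-up paths:

```cpp
// Made-up paths, real contrast: concatenation glues the filename onto
// the directory's own name; path append puts the file inside it.
#include <filesystem>
#include <iostream>
#include <string>

int main()
{
    std::filesystem::path parent{"/tmp/build"};

    std::string wrong = parent.string() + "symbols.zip";  // /tmp/buildsymbols.zip
    std::filesystem::path right = parent / "symbols.zip"; // /tmp/build/symbols.zip

    std::cout << wrong << '\n' << right << '\n';
}
```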

It's cool to be able to write 'arg1 << "stuff" << var ...;' where a callable
accepting a std::ostream reference is expected, but given actual C++ lambdas,
the cascading compile errors make it no longer worth trying to keep that
working.
Also clean up a lingering BOOST_FOREACH() and a boost::bind() while at it.
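
For contrast, the plain-lambda replacement for such a stream expression looks
like this (a minimal sketch, not the actual call site):

```cpp
// Minimal sketch: a plain lambda bound to std::function replaces the
// old Phoenix-style stream expression (arg1 << "stuff" << var).
#include <functional>
#include <iostream>
#include <string>

int main()
{
    std::string var{"value"};

    std::function<void(std::ostream&)> report =
        [&var](std::ostream& out) { out << "stuff " << var; };

    report(std::cout);
    std::cout << '\n';
}
```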

The recommended template uses hyphens; change them to underscores so the
generated names are valid Python module names.

It seems the problem addressed by aab769e wasn't some synergy between
Boost.Phoenix and Boost.Function, but rather the lack of a Phoenix header file
introducing operator<<().
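
If so, the fix is just a matter of pulling in the right operator header; a
hedged sketch (the exact header named here is an assumption):

```cpp
// Hedged sketch: without a Phoenix operator header, an expression like
// (arg1 << "text") can't find operator<<() for placeholders.
#include <boost/phoenix/core.hpp>
#include <boost/phoenix/operator.hpp>  // introduces operator<<() et al.
#include <iostream>

int main()
{
    namespace phx = boost::phoenix;
    auto print = (phx::ref(std::cout) << phx::arg_names::arg1 << '\n');
    print("hello");  // streams the argument to std::cout
}
```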

from std::function, since some consumers still use (e.g.)
boost::phoenix::placeholders::arg1 to generate an inline callable.
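
That pattern, sketched (illustrative, not an actual consumer):

```cpp
// Illustrative sketch: a Phoenix placeholder expression stored in a
// boost::function -- the "inline callable" pattern consumers rely on.
#include <boost/function.hpp>
#include <boost/phoenix/core.hpp>
#include <boost/phoenix/operator.hpp>
#include <iostream>

int main()
{
    namespace phx = boost::phoenix;
    boost::function<void(std::ostream&)> callback =
        (phx::arg_names::arg1 << "inline callable\n");
    callback(std::cout);
}
```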

On GitHub Windows Actions runners, we're getting permissions errors trying to
tell the Python interpreter to run a NamedTempFile script. Try using
NamedExtTempFile to give each such script a .py extension.
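
NamedExtTempFile is the tests' own helper, so its real interface may differ
from this sketch; the point is only that the generated script gets a genuine
.py suffix:

```cpp
// Hedged sketch of the idea only; the real NamedExtTempFile lives in
// the viewer's test support code and its interface may differ. The
// point: the generated script carries a genuine .py extension.
#include <cstdlib>
#include <filesystem>
#include <fstream>
#include <string>

std::filesystem::path make_ext_temp_script(const std::string& stem,
                                           const std::string& ext,
                                           const std::string& content)
{
    auto path = std::filesystem::temp_directory_path()
              / (stem + "_" + std::to_string(std::rand()) + "." + ext);
    std::ofstream(path) << content;  // write the script body
    return path;
}
```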

do not need 'using' directive, given BOOST_BIND_GLOBAL_PLACEHOLDERS.
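
For reference, a standalone illustration of what BOOST_BIND_GLOBAL_PLACEHOLDERS
provides (not viewer code):

```cpp
// Standalone illustration: with BOOST_BIND_GLOBAL_PLACEHOLDERS defined,
// boost/bind.hpp keeps _1, _2, ... at global scope (and without the
// deprecation warning), so no 'using' directive is needed.
#define BOOST_BIND_GLOBAL_PLACEHOLDERS
#include <boost/bind.hpp>
#include <iostream>

int add(int a, int b) { return a + b; }

int main()
{
    auto add_five = boost::bind(add, _1, 5);  // _1 found at global scope
    std::cout << add_five(37) << '\n';        // prints 42
}
```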